Con Edison

Consolidated Edison Co. of New York (Con Edison) is a regulated utility serving 3.2 million electric customers in New York City and Westchester County. The company recognized that it could realize significant cost savings if more customers adopted electronic billing, in which bills are delivered electronically without a paper version, eliminating the printing, postage, labor and equipment costs associated with paper billing.

In addition to operational cost savings, further positive results could be gained from driving e-bill adoption, including improved customer relationships and fewer billing-related service calls. According to a Harris Interactive study conducted for CheckFree Research Services, customers who receive e-bills at a biller organization’s website show higher satisfaction levels, with 25 percent of them reporting an improved relationship with their biller as a result of receiving e-bills.

The challenge was how to attract more customers to the low-cost, high-impact online channel for billing activities and shut off their paper bills. To convince customers to change their behavior, Con Edison had to find a way to cost-effectively generate widespread awareness of electronic billing and explain how benefits, such as saving time, reducing clutter and helping the environment, outweigh concerns customers may have about giving up their paper bills.

THE ADVANTAGES OF ELECTRONIC BILLING

Together, Con Edison and CheckFree developed a comprehensive marketing campaign designed to communicate the advantages of electronic billing to as many customers as possible. As a critical first step, Con Edison gained cross-organizational alignment regarding the campaign strategy. Drawing from a longstanding commitment to the environment, the company made a strategic decision to implement an ongoing campaign that conveyed a “Go green with e-bills” message across numerous channels in order to maximize reach within its customer base. Research has shown that when attempting to change consumer behavior, a comprehensive, consistent and widespread marketing campaign is far more effective than “one-off” campaigns utilizing minimal tactics.

In May 2007, Con Edison launched the integrated marketing campaign, capitalizing on the rising wave of consumer awareness of environmental issues. Con Edison promoted paperless billing and electronic payment through a variety of methods and channels, including:

  • Customer emails;
  • Direct-mail postcards;
  • On-hold messaging;
  • Radio advertising;
  • Invoice messaging;
  • Press releases;
  • Con Edison website messaging;
  • My CheckFree website messaging;
  • Customer newsletters; and
  • Internal employee newsletters.

Each communication featured the company’s environmental incentive – for every customer choosing the paper-saving option of viewing and paying their bills online, Con Edison would donate $1 to a local, nonprofit tree-planting fund to help the environment in New York.

To aid in driving awareness, Con Edison made a deliberate decision to create an extended campaign designed to consistently reinforce the safety, security, simplicity and environmental benefits of electronic billing. Based on the success of the marketing activities seen thus far, Con Edison plans to include the “Go green with e-bills” theme in every consumer communication going forward.

THE RESULTS

Con Edison showed persistence and enthusiasm in pursuing a multichannel marketing campaign, and it was well worth the effort. In the first seven months after the campaign was launched, Con Edison generated impressive results, including the following:

  • More than 42,000 e-bills activated;
  • A 57 percent increase in e-bill activations over the same time period in 2006; and
  • A 19 percent increase in online e-bill payments over the same time period in 2006.

Con Edison also has benefited from the positive press and goodwill it has created in the community. By providing its customers with a better, more environmentally friendly choice for paying and receiving their utility bills, Con Edison is minimizing costs, maintaining operational control, optimizing growth for its business and turning customer interactions into profitable relationships.

SmartGridNet Architecture for Utilities

With the accelerating movement toward distributed generation and the rapid shift in energy consumption patterns, today’s power utilities are facing growing requirements for improved management, capacity planning, control, security and administration of their infrastructure and services.

UTILITY NETWORK BUSINESS DRIVERS

These requirements are driving a need for greater automation and control throughout the power infrastructure, from generation through the customer site. Utilities are also interested in providing end-customers with new applications, such as advanced metering infrastructure (AMI), online usage reports and outage status. At the same time, utilities are under pressure to reduce costs and automate operations, as well as to protect their infrastructures from service disruption in compliance with homeland security requirements.

To succeed, utilities must seamlessly support these demands with an embedded infrastructure of traditional devices and technologies. This will allow them to provide a smooth evolution to next-generation capabilities, manage life cycle issues for aging equipment and devices, maintain service continuity, minimize capital investment, and ensure scalability and future-proofing for new applications, such as smart metering.

By adopting an evolutionary approach to an intelligent communications network (SmartGridNet), utilities can maximize their ability to leverage the existing asset base and minimize capital and operations expenses.

THE NEED FOR AN INTELLIGENT UTILITY NETWORK

As a first step toward implementing a SmartGridNet, utilities must implement intelligent electronic devices (IEDs) throughout the infrastructure – from generation and transmission through distribution directly to customer premises – if they are to effectively monitor and manage facilities, load and usage. A sophisticated operational communications network then interconnects such devices through control centers, providing support for supervisory control and data acquisition (SCADA), teleprotection, remote meter reading, and operational voice and video. This network also enables new applications such as field personnel management and dispatch, safety and localization. In addition, the utility’s corporate communications network increases employee productivity and improves customer service by providing multimedia; voice, video, and data communications; worker mobility; and contact center capabilities.

These two network types – operational and corporate – and the applications they support may leverage common network facilities; however, they have very different requirements for availability, service assurance, bandwidth, security and performance.

SMARTGRIDNET REQUIREMENTS

Network technology is critical to the evolution of the next-generation utility. The SmartGridNet must support the following key requirements:

  • Virtualization. Enables operation of multiple virtual networks over common infrastructure and facilities while maintaining mutual isolation and distinct levels of service.
  • Quality of service (QoS). Allows priority treatment of critical traffic on a “per-network, per-service, per-user basis.”
  • High availability. Ensures constant availability of critical communications, transparent restoration and “always on” service – even when the public switched telephone network (PSTN) or local power supply suffers outages.
  • Multipoint-to-multipoint communications. Provides integrated control and data collection across multiple sensors and regulators via synchronized, redundant control centers for disaster recovery.
  • Two-way communications. Supports increasingly sophisticated interactions between control centers and end-customers or field forces to enable new capabilities, such as customer sellback, return or credit allocation for locally stored power; improved field service dispatch; information sharing; and reporting.
  • Mobile services. Improves employee efficiency, both within company facilities and in the field.
  • Security. Protects the infrastructure from malicious and inadvertent compromise from both internal and external sources, ensures service reliability and continuity, and complies with critical security regulations such as North American Electric Reliability Corp. (NERC).
  • Legacy service integration. Accommodates the continued presence of legacy remote terminal units (RTUs), meters, sensors and regulators, supporting circuit, X.25, frame relay (FR), and asynchronous transfer mode (ATM) interfaces and communications.
  • Future-proofing. Provides the capability and scalability to meet not just today’s applications but tomorrow’s, as driven by regulatory requirements (such as smart metering) and new revenue opportunities, such as utility delivery of business and residential telecommunications (U-Telco) services.
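To make the QoS requirement concrete, the sketch below implements a strict-priority scheduler in which critical operational traffic (teleprotection, SCADA) is always dequeued before lower classes. The class names and priority ordering are illustrative assumptions, not part of any SmartGridNet specification.

```python
import heapq

# Illustrative traffic classes (assumption): lower number = higher priority.
PRIORITY = {"teleprotection": 0, "scada": 1, "metering": 2, "corporate": 3}

class StrictPriorityScheduler:
    """Dequeues the highest-priority packet first; FIFO within a class."""
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker that preserves arrival order within a class

    def enqueue(self, traffic_class, payload):
        heapq.heappush(self._heap, (PRIORITY[traffic_class], self._seq, payload))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

sched = StrictPriorityScheduler()
sched.enqueue("corporate", "email sync")
sched.enqueue("metering", "meter read batch")
sched.enqueue("scada", "breaker status change")
print(sched.dequeue())  # "breaker status change" leaves first: SCADA preempts the rest
```

In a real network this ordering would be enforced by per-class queues in the Ethernet/MPLS equipment rather than in software, but the guarantee – critical traffic is never stuck behind bulk traffic – is the same idea.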

SMARTGRIDNET EVOLUTION

A number of network technologies – both wire-line and wireless – work together to achieve these requirements in a SmartGridNet. Utilities must leverage a range of network integration disciplines to engineer a smooth transformation of their existing infrastructure to a SmartGridNet.

The remainder of this paper describes an evolutionary scenario, in which:

  • Next-generation synchronous optical network (SONET)-based multiservice provisioning platforms (MSPPs), with native QoS-enabled Ethernet capabilities, are seamlessly introduced at the transport layer to switch traffic from both embedded sensors and next-generation IEDs.
  • Cost-effective wavelength division multiplexing (WDM) is used to increase communications network capacity for new traffic while leveraging embedded fiber assets.
  • Multiprotocol label switching (MPLS)/IP routing infrastructure is introduced as an overlay on the transport layer only for traffic requiring higher-layer services that cannot be addressed more efficiently by the transport-layer MSPPs.
  • Circuit emulation over IP virtual private networks (VPNs) is supported as a means for carrying sensor traffic over shared or leased network facilities.
  • A variety of communications applications are delivered over this integrated infrastructure to enhance operational efficiency, reliability, employee productivity and customer satisfaction.
  • A toolbox of access technologies is appropriately applied, per specific area characteristics and requirements, to extend power service monitoring and management all the way to the end-customer’s premises.
  • A smart home network offers new capabilities to the end-customer, such as AMI, appliance control and flexible billing models.
  • The availability, security, performance and regulatory compliance of the communications network are managed and assured.

THE SMARTGRIDNET ARCHITECTURE

Figure 1 provides an architectural framework that we may use to illustrate and map the relevant communications technologies and protocols.

The backbone network in Figure 1 interconnects corporate sites and data centers, control centers, generation facilities, transmission and distribution substations, and other core facilities. It can isolate the distinct operational and corporate communications networks and subnetworks while enforcing the critical network requirements outlined in the section above.

The underlying transport network for this intelligent backbone is made up of both fiber and wireless (for example, microwave) technologies. The backbone also employs ring and mesh architectures to provide high availability and rapid restoration.

INTELLIGENT CORE TRANSPORT

As alluring as pure packet networks may be, SONET remains a key technology for operational backbones. Only SONET can support the range of new and legacy traffic types while meeting the stringent absolute-delay, differential-delay and 50-millisecond restoration requirements of real-time traffic.

SONET transport for legacy traffic may be provided in MSPPs, which interoperate with embedded SONET elements to implement ring and mesh protection over fiber facilities and time division multiplexing (TDM)-based microwave. Full-featured Ethernet switch modules in these MSPPs enable next-generation traffic via Ethernet over SONET (EOS) and/or packet over SONET (POS). Appropriate, cost-effective WDM solutions – for example, coarse, passive and dense WDM – may also be applied to guarantee sufficient capacity while leveraging existing fiber assets.

BACKBONE SWITCHING/ROUTING

From a switching and routing perspective, a significant amount of traffic in the backbone may be managed at the transport layer – for example, via QoS-enabled Ethernet switching capabilities embedded in the SONET-based MSPPs. This is a key capability for supporting expedited delivery of critical traffic types, enabling utilities to migrate in the future to generic object-oriented substation event (GOOSE)-based inter-substation communications for SCADA and teleprotection, in accordance with standards such as IEC 61850.

Where higher-layer services – for example, IP VPN, multicast, ATM and FR – are required, however, utilities can introduce a multiservice switching/routing infrastructure incrementally on top of the transport infrastructure. This switching infrastructure is based on MPLS, implementing Layer 2 transport encapsulation and/or IP VPNs, per the relevant Internet Engineering Task Force (IETF) requests for comments (RFCs).

This type of unified infrastructure reduces operations costs by sharing switching and restoration capabilities across multiple services. Current IP/MPLS switching technology is consistent with the network requirements summarized above for service traffic requiring higher-layer services, and may be combined with additional advanced services such as Layer 3 VPNs and unified threat management (UTM) devices/firewalls for further protection and isolation of traffic.

CORE COMMUNICATIONS APPLICATIONS

Operational services such as tele-protection and SCADA represent key categories of applications driving the requirements for a robust, secure, cost-effective network as described. Beyond these, there are a number of communications applications enabling improved operational efficiency for the utility, as well as mechanisms to enhance employee productivity and customer service. These include, but are not limited to:

  • Active network controls. Improve capacity and utilization of the electricity network.
  • Voice over IP (VoIP). Leverages common network infrastructure to reduce the cost of operational and corporate voice communications – for example, eliminating costly channel banks for individual lines required at remote substations.
  • Closed-circuit TV (CCTV)/video over IP. Improves surveillance of remote assets and secure automated facilities.
  • Multimedia collaboration. Combines voice, video and data traffic in a rich application suite to enhance communication and worker productivity, giving employees direct access to centralized expertise and online resources (for example, standards and diagrams).
  • IED interconnection. Enables better measurement and management of the electricity network.
  • Mobility. Supports in-plant and field worker mobility – via cellular, land mobile radio (LMR) and WiFi – to improve the efficiency of key work processes.
  • Contact center. Employs next-generation communications and best-in-class customer service business processes to improve customer satisfaction.

DISTRIBUTION AND ACCESS NETWORKS

The intelligent utility distribution and access networks are subtending networks from the backbone, accommodating traffic between backbone switches/applications and devices in the distribution infrastructure all the way to the customer premises. IEDs on customer premises include automated meters and device regulators to detect and manage customer power usage.

These new devices are primarily packet-based. They may, therefore, be best supported by packet-based access network technologies. However, for select rings, TDM may also be chosen, as warranted. The packet-based access network technology chosen depends on the specifics of the sites to be connected and the economics associated with that area (for example, right of way, customer densities and embedded infrastructure).

Regardless of the access and last-mile network designs, traffic ultimately arrives at the network via an IP/MPLS edge switch/router with connectivity to the backbone IP/MPLS infrastructure. This switching/routing infrastructure ensures connectivity among the intelligent edge devices, core capabilities and control applications.

THE SMART HOME NETWORK

A futuristic home can support many remotely controlled and managed appliances centered on lifestyle improvements of security, entertainment, health and comfort (see Figure 2). In such a home, applications like smart meters and appliance control could be provided by application service providers (ASPs) (such as smart meter operators or utilities), using a home service manager and appropriate service gateways. This architecture differentiates between the access provider – that is, the utility/U-Telco or other public carrier – and the multiple ASPs who may provide applications to a home via the access provider.

FLEXIBLE CHARGING

By employing smart meters and developing the ability to retrieve electricity usage data at regular intervals – potentially several readings per hour – retailers could make billing a significant competitive differentiator. Detailed usage information has already enabled value-added billing in the telecommunications world, and AMI can do likewise for electricity services. In time, electricity users will come to expect the same degree of flexible charging on their electricity bills that they already experience with their telephone bills – including, for example, prepaid and post-paid options, time-of-use tariffs, automated billing for house rentals (vacation), family or group tariffs, budget tariffs and messaging.
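As a rough illustration of time-of-use charging built on interval meter data, the sketch below bills interval readings at peak and off-peak rates. The rates, peak window and dates are invented for the example; actual tariffs would be set by the retailer and regulator.

```python
from datetime import datetime

# Illustrative tariff (assumption): peak is 16:00-20:00 on weekdays.
PEAK_RATE = 0.24      # $/kWh, hypothetical
OFF_PEAK_RATE = 0.11  # $/kWh, hypothetical

def is_peak(ts: datetime) -> bool:
    """Weekday afternoons count as peak under this invented tariff."""
    return ts.weekday() < 5 and 16 <= ts.hour < 20

def bill(readings):
    """readings: list of (timestamp, kWh consumed in that interval)."""
    return round(sum(
        kwh * (PEAK_RATE if is_peak(ts) else OFF_PEAK_RATE)
        for ts, kwh in readings), 2)

usage = [
    (datetime(2008, 3, 3, 17, 0), 1.5),   # Monday 17:00 -> peak
    (datetime(2008, 3, 3, 22, 30), 2.0),  # Monday 22:30 -> off-peak
]
print(bill(usage))  # 1.5*0.24 + 2.0*0.11 = 0.58
```

The same per-interval loop extends naturally to the other schemes mentioned above (budget tariffs, group tariffs) by swapping in a different rate function.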

MANAGING THE COMMUNICATIONS NETWORK

For utilities to leverage the communications network described above to meet key business requirements, they must intelligently manage that network’s facilities and services. This includes:

  • Configuration management. Provisioning services to ensure that underlying switching/routing and transport requirements are met.
  • Fault and performance management. Monitoring, correlating and isolating fault and performance data so that proactive, preventative and reactive corrective actions can be initiated.
  • Maintenance management. Planning of maintenance activities, including material management and logistics, and geographic information management.
  • Restoration management. Creating trouble tickets, dispatching and managing the workforce, and carrying out associated tracking and reporting.
  • Security management. Assuring the security of the infrastructure, managing access to authorized users, responding to security events, and identifying and remediating vulnerabilities per key security requirements such as NERC.

Utilities can integrate these capabilities into their existing network management infrastructures, or they can fully or partially outsource them to managed network service providers.
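A toy version of the fault and performance management loop described above might poll link metrics and open a trouble ticket whenever a threshold is crossed. The metric names and thresholds here are illustrative assumptions, not values from any utility standard.

```python
# Hypothetical thresholds (assumptions, not from any standard).
THRESHOLDS = {"latency_ms": 50.0, "loss_pct": 0.5}

def evaluate(samples):
    """samples: {link_id: {metric: value}}. Returns trouble tickets to open."""
    tickets = []
    for link, metrics in samples.items():
        for metric, value in metrics.items():
            limit = THRESHOLDS.get(metric)
            if limit is not None and value > limit:
                tickets.append(f"{link}: {metric}={value} exceeds {limit}")
    return tickets

polled = {
    "substation-12/uplink": {"latency_ms": 72.0, "loss_pct": 0.1},
    "control-center/ring-a": {"latency_ms": 8.0, "loss_pct": 0.0},
}
for ticket in evaluate(polled):
    print(ticket)  # only the substation uplink breaches its latency threshold
```

A production system would add the correlation, dispatch and workforce-tracking steps listed above on top of this basic detect-and-ticket loop.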

Figure 3 shows how key technologies are mapped to the architectural framework described previously. Evolving to an intelligent utility network in a cost-effective manner requires trusted support throughout planning, design, deployment, operations and maintenance.

CONCLUSION

Utilities can evolve their existing infrastructures to meet key SmartGridNet requirements by effectively leveraging a range of technologies and approaches. Through careful planning, design, engineering and application of this technology, such firms may achieve the business objectives of SmartGridNet while protecting their current investments in infrastructure. Ultimately, by taking an evolutionary approach to SmartGridNet, utilities can maximize their ability to leverage the existing asset base as well as minimize capital and operations expenses.

Pepco Holdings, Inc.

The United States and the world are facing two preeminent energy challenges: the rising cost of energy and the impact of increasing energy use on the environment. As a regulated public utility and one of the largest energy delivery companies in the Mid-Atlantic region, Pepco Holdings Inc. (PHI) recognized that it was uniquely positioned to play a leadership role in helping meet both of these challenges.

PHI calls the plan it developed to meet these challenges the Blueprint for the Future (Blueprint). The plan builds on work already begun through PHI’s Utility of the Future initiative, as well as other programs. The Blueprint focuses on implementing advanced technologies and energy efficiency programs to improve service to its customers and enable them to manage their energy use and costs. By providing tools for nearly 2 million customers across three states and the District of Columbia to better control their electricity use, PHI believes it can make a major contribution to meeting the nation’s energy and environmental challenges, and at the same time help customers keep their electric and natural gas bills as low as possible.

The PHI Blueprint is designed to give customers what they want: reasonable and stable energy costs, responsive customer service, power reliability and environmental stewardship.

PHI is deploying a number of innovative technologies. Some, such as its automated distribution system, help to improve reliability and workforce productivity. Other systems, including an advanced metering infrastructure (AMI), will enable customers to monitor and control their electricity use, reduce their energy costs and gain access to innovative rate options.

PHI’s Blueprint is both ambitious and complex. Over the next five years PHI will be deploying new technologies, modifying and/or creating numerous information systems, redefining customer and operating work processes, restructuring organizations, and managing relationships with customers and regulators in four jurisdictions. PHI intends to do all of this while continuing to provide safe and reliable energy service to its customers.

To assist in developing and executing this plan, PHI reached out to peer utilities and vendors. One significant “partner” group is the Global Intelligent Utility Network Coalition (GIUNC), established by IBM, which currently includes CenterPoint Energy (Texas), Country Energy (New South Wales, Australia) and PHI.

Leveraging these resources and others, PHI managers spent much of 2007 compiling detailed plans for realizing the Blueprint. Several aspects of these planning efforts are described below.

VISION AND DESIGN

In 2007, multiple initiatives were launched to flesh out the many aspects of the Blueprint. As Figure 1 illustrates, all of the initiatives were related and designed to generate a deployment plan based on a comprehensive review of the business and technical aspects of the project.

At this early stage, PHI does not yet have all the answers. Indeed, prematurely committing to specific technologies or designs for work that will not be completed for five years can raise the risk of obsolescence and lost investment. The deployment plan and system map, discussed in more detail below, are intended to serve as a guide. They will be updated and modified as decision points are reached and new information becomes available.

BUSINESS CASE VALIDATION

One of the first tasks was to review and define in detail the business case analyses for the project components. Both benefit assumptions and implementation costs were tested. Reference information (benchmarks) for this review came from a variety of sources: IBM experience in projects of similar scope and type; PHI materials and analysis; experiences reported by other GIUNC members and other utilities; and other publicly available sources. This information was compiled, and a present value analysis was conducted on discounted cash flow and rate of return, as shown in Figure 2.

In addition to an “operational benefits” analysis, PHI and the Brattle Group developed value assessments associated with demand response offerings such as critical peak pricing. With demand response, peak consumption can be reduced and capacity costs avoided. This means lower total energy prices for customers and fewer new capacity additions in the market. As Figure 2 shows, even in the worst-case scenario for demand response savings, operational and customer benefits will offset the cost of PHI’s AMI investment.

The information from these various cases has since been integrated into a single program management tool. Additional capabilities for optimizing results based on value, cost and schedule were developed. Finally, dynamic relationships between variables were modeled and added to the tool, recognizing that assumptions don’t always remain constant as plans are changed. One example of this would be the likely increase in call center cost per meter when deployment accelerates and customer inquiries increase.
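The present value analysis described above can be sketched as a simple discounted cash flow, with one dynamic relationship modeled: a per-meter call center cost that jumps once annual deployment passes a surge threshold. All figures are invented for illustration, so the resulting NPV says nothing about PHI's actual business case.

```python
def npv(rate, cash_flows):
    """Discounted value of year-indexed cash flows (year 0 is undiscounted)."""
    return sum(cf / (1 + rate) ** yr for yr, cf in enumerate(cash_flows))

def call_center_cost(meters_deployed, base_cost=1.50, surge_threshold=200_000):
    """Hypothetical dynamic assumption: per-meter support cost rises 40%
    once annual deployment exceeds the surge threshold."""
    per_meter = base_cost * (1.4 if meters_deployed > surge_threshold else 1.0)
    return meters_deployed * per_meter

# Illustrative five-year plan: deployment ramp, operational benefits, capex.
deployment = [50_000, 150_000, 300_000, 300_000, 200_000]  # meters/year
benefits   = [0, 4e6, 12e6, 20e6, 24e6]                    # $/year
capex      = [30e6, 25e6, 20e6, 10e6, 5e6]                 # $/year

net = [b - c - call_center_cost(m)
       for b, c, m in zip(benefits, capex, deployment)]
print(round(npv(0.08, net) / 1e6, 1))  # project NPV in $M at an 8% discount rate
```

The sign and magnitude of the result depend entirely on the invented inputs; the point is only that the dynamic cost term changes the answer relative to a flat per-meter assumption, which is why such relationships were added to the tool.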

HIGH-LEVEL COMMUNICATIONS ARCHITECTURE DESIGN

To define and develop the communications architecture, PHI deployed a structured approach built around IBM’s proprietary optimal comparative communications architecture methodology (OCCAM). This methodology established the communications requirements for AMI, data architecture and other technologies considered in the Blueprint. Next, the existing communications infrastructure and capabilities that could be leveraged in support of the new technologies were evaluated. Then, alternative solutions to “close the gap” were reviewed. Finally, all of this information was incorporated into an analytical tool that matched the most appropriate communications technology to each specified geographic area and business need.
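The final matching step might look, in spirit, like the toy decision function below, which maps simple area characteristics (customer density, embedded fiber, right of way) to an access technology. OCCAM itself is proprietary; these rules, thresholds and technology labels are invented for illustration.

```python
def choose_access(density_per_km2, has_fiber, has_right_of_way):
    """Toy stand-in for a technology-matching rule set (all thresholds invented)."""
    if has_fiber:
        return "existing fiber"            # leverage embedded infrastructure first
    if density_per_km2 > 1000 and has_right_of_way:
        return "new fiber build"           # density justifies construction cost
    if density_per_km2 > 100:
        return "broadband wireless"        # moderate density, no fiber economics
    return "cellular / leased carrier"     # sparse areas: lease rather than build

areas = [("downtown", 5000, True, True),
         ("suburb", 800, False, True),
         ("rural feeder", 20, False, False)]
for name, density, fiber, row in areas:
    print(name, "->", choose_access(density, fiber, row))
```

A real tool would weigh many more inputs (buildout cost, spectrum availability, existing leases) and score alternatives rather than applying hard rules, but the structure of the match is the same.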

SYSTEM MAP AND INFORMATION MODEL

Defining the data framework and the approach to overall data integration elements across the program areas is essential if companies are to effectively and efficiently implement AMI systems and realize their identified benefits.

To help PHI understand what changes are needed to get from the current state to a shared vision of the future, the project team reviewed and documented the “current state” of the systems impacted by the plans. Then, subject matter experts in meters, billing, outage, system design, work and workforce management, and business data analysis were engaged to expand on the data architecture information, including information on systems, functions and the process flows that tie them all together. Finally, the information gathered was used to develop a shared vision of how PHI processes, functions, systems and data will fit together in the future.

By comparing the design of as-is systems with the to-be architecture of information management and information flows, PHI identified information gaps and developed a set of next steps. One key step establishes an “enterprise architecture” model for development. The first objective would be to establish and enforce governance policies. With these in place, PHI will define, draft and ratify detailed enterprise architecture and enforce priorities, standards, procedures and processes.

PHASE 2 DEPLOYMENT PLAN

Based on the planning conducted over the last half of the year, a high-level project plan for Phase 2 deployment was compiled. The focus was mainly on Blueprint initiatives, while considering dependencies and constraints reported in other transformation initiatives. PHI subject matter experts, project team leads and experience gathered from other utilities were all leveraged to develop the Blueprint deployment plan.

The deployment plan includes multiple types of tasks; processes; and organization, technical and project management office-related activities, and covers a period of five to six years. Initiatives will be deployed in multiple releases, phased across jurisdictions (Delaware, District of Columbia, Maryland, New Jersey) and coordinated between meter installation and communications infrastructure buildout schedules.

The plan incorporates several initiatives, including process design, system development, communications infrastructure and AMI, and various customer initiatives. Because these initiatives are interrelated and complex, some programmatic initiatives are also called for, including change management, benefits realization and program management. From this deployment plan, more detailed project plans and dependencies are being developed to provide PHI with an end-to-end view of implementation.

As part of the planning effort, key risk areas for the Blueprint program were also defined, as shown in Figure 3. Input from interviews and knowledge leveraged from similar projects were included to ensure a comprehensive understanding of program risks and to begin developing mitigation strategies.

CONCLUSION

As PHI moves forward with implementation of its AMI systems, new issues and challenges are certain to arise, and programmatic elements are being established to respond. A program management office has been established and continues to drive more detail into plans while tracking and reporting progress against active elements. AMI process development is providing the details for business requirements, and system architecture discussions are resolving interface issues.

Deployment is still in its early stages, and much work lies ahead. However, with the effort grounded in a clear vision, the journey ahead looks promising.

Utility Mergers and Acquisitions: Beating the Odds

Merger and acquisition activity in the U.S. electric utility industry has increased following the 2005 repeal of the Public Utility Holding Company Act (PUHCA). A key question for the industry is not whether M&A will continue, but whether utility executives are prepared to manage effectively the complex regulatory challenges that have evolved.

M&A activity is (and always has been) the most potent, visible and (often) irreversible option available to utility CEOs who wish to reshape their portfolios and meet their shareholders’ expectations for returns. However, M&A has too often been applied reflexively – much like the hammer that sees everything as a nail.

The American utility industry is likely to undergo significant consolidation over the next five years. There are several compelling rationales for consolidation. First, M&A has the potential to offer real economic value. Second, capital-market and competitive pressures favor larger companies. Third, the changing regulatory landscape favors larger entities with the balance sheet depth to weather the uncertainties on the horizon.

LEARNING FROM THE PAST

Historically, however, acquirers have found it difficult to derive value from merged utilities. With the exception of some vertically integrated deals, most M&A deals have been value-neutral or value-diluting. This track record can be explained by a combination of factors: steep acquisition premiums, harsh regulatory givebacks, anemic cost reduction targets and (in more than half of the deals) a failure to achieve targets quickly enough to make a difference. In fact, over an eight-year period, less than half the utility mergers actually met or exceeded the announced cost reduction levels resulting from the synergies of the merged utilities (Figure 1).

The lessons learned from these transactions can be summarized as follows: Don’t overpay; negotiate a good regulatory deal; aim high on synergies; and deliver on them.

In trying to deliver value-creating deals, CEOs often bump up against the following realities:

  • The need to win approval from the target’s shareholders drives up acquisition premiums.
  • The need to receive regulatory approval for the deal and to alleviate organizational uncertainty leads to compromises.
  • Conservative estimates of the cost reductions resulting from synergies are made to reduce the risk of giving away too much in regulatory negotiations.
  • Delivering on synergies proves tougher than anticipated because of restrictions agreed to in regulatory deals or because of the organizational inertia that builds up during the 12- to 18-month approval process.

LOOKING AT PERFORMANCE

Total shareholder return (TSR) is significantly affected by two external deal negotiation levers – acquisition premiums and regulatory givebacks – and two internal levers – synergies estimated and synergies delivered. Between 1997 and 2004, mergers in all U.S. industries created an average TSR of 2 to 3 percent relative to the market index two years after closing. In contrast, utility mergers typically underperformed the utility index by about 2 to 3 percent three years after the transaction announcement. T&D mergers underperformed the index by about 4 percent, whereas mergers of vertically integrated utilities beat the index by about 1 percent three years after the announcement (Figure 2).
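The relative-TSR comparison above can be made concrete with a minimal sketch. The prices and dividends below are hypothetical placeholders, not data from the deals discussed:

```python
# Relative total shareholder return (TSR): a minimal sketch using
# hypothetical prices and dividends -- illustrative only.

def tsr(start_price, end_price, dividends):
    """Total shareholder return over a holding period."""
    return (end_price - start_price + dividends) / start_price

def relative_tsr(stock, index):
    """TSR of the merged utility minus TSR of the utility index."""
    return tsr(*stock) - tsr(*index)

# Hypothetical example: the utility's shares go from $40 to $41 and pay
# $2.40 in dividends; the utility index goes from 100 to 106 with a
# 3.0-point payout over the same window.
utility = (40.0, 41.0, 2.40)
index = (100.0, 106.0, 3.0)

print(f"Utility TSR:  {tsr(*utility):+.1%}")                 # +8.5%
print(f"Index TSR:    {tsr(*index):+.1%}")                   # +9.0%
print(f"Relative TSR: {relative_tsr(utility, index):+.1%}")  # -0.5%
```

A deal that looks respectable in absolute terms can still underperform the index, which is the pattern the utility merger data shows.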

For 10 recent mergers, the lower the share of the merger savings retained by the utilities and the higher the premium paid for the acquisition, the greater the likelihood that the deal destroyed shareholder value, resulting in negative TSR.

Although these appear to be obvious pitfalls that a seasoned management team should be able to recognize and overcome, translating this knowledge into tangible actions and results has been difficult.

So how can utility boards and executives avoid being trapped in a cycle of doing the same thing again and again while expecting different results (Einstein’s definition of insanity)? We suggest that a disciplined end-to-end M&A approach will (if well-executed) tilt the balance in the acquirer’s favor and generate long-term shareholder value. That approach should include the following four broad objectives:

  • Establishment of compelling strategic logic and rationale for the deal;
  • A carefully managed regulatory approval process;
  • Integration that takes place early and aggressively; and
  • A top-down approach for designing realistic but ambitious economic targets.

GETTING IT RIGHT: FOUR BROAD OBJECTIVES THAT ENHANCE M&A VALUE CREATION

To complete successful M&As, utilities must develop a more disciplined approach that incorporates the lessons learned from both utilities and other industrial sectors. At the highest level, adopting a framework with four broad objectives will enhance value creation before the announcement of the deal and through post-merger integration. To do this, utilities must:

  1. Establish a compelling strategic logic and rationale for the deal. A critical first step is asking the question, why do the merger? To answer this question, deal participants must:
    • Determine the strategic logic for long-term value creation with and without M&A. Too often, executives are optimistic about the opportunity to improve other utilities, but they overlook the performance potential in their current portfolio. For example, without M&A, a utility might be able to invest and grow its rate base, reduce the cost of operations and maintenance, optimize power generation and assets, explore more aggressive rate increases and changes to the regulatory framework, and develop the potential for growth in an unregulated environment. Regardless of whether a utility is an acquirer or a target, a quick (yet comprehensive) assessment will provide a clear perspective on potential shareholder returns (and risks) with and without M&A.
    • Conduct a value-oriented assessment of the target. Utility executives typically have an intuitive feel for the status of potential M&A targets adjacent to their service territories and in the broader subregion. However, when considering M&A, they should go beyond the obvious criteria (size and geography) and candidates (contiguous regional players) to consider specific elements that expose the target’s value potential for the acquirer. Such value drivers could include an enhanced power generation and asset mix, improvements in plant availability and performance, better cost structures, an ability to respond to the regulatory environment, and a positive organizational and cultural fit. Also critical to the assessment are the noneconomic aspects of the deal, such as headquarters sharing, potential loss of key personnel and potential paralysis of the company (for example, when a merger or acquisition freezes a company’s ability to pursue M&A and other large initiatives for two years).
    • Assess internal appetites and capabilities for M&A. Successful M&A requires a broad commitment from the executive team, enough capable people for diligence and integration, and an appetite for making the tough decisions essential to achieving aggressive targets. Acquirers should hold pragmatic executive-level discussions with potential targets to investigate such aspects as cultural fit and congruence of vision. Utility executives should conduct an honest assessment of their own management teams’ M&A capabilities and depth of talent and commitment. Among historic M&A deals, those that involved fewer than three states and those in which the acquirer was twice as big as the target were easier to complete and realized more value.
  2. Carefully manage the regulatory approval process. State regulatory approvals present the largest uncertainty and risk in utility M&A, clearly affecting the economics of any deal. However, too often, these discussions start and end with rate reductions so that the utility can secure approvals. The regulatory approval process should be similar to the rigorous due diligence that’s performed before the deal’s announcement. This means that when considering M&A, utilities should:
    • Consider regulatory benefits beyond the typical rate reductions. The regulatory approval process can be used to create many benefits that share rewards and risks, and to provide advantages tailored to the specific merger’s conditions. Such benefits include a stronger combined balance sheet and a potential equity infusion into the target’s subsidiaries; an ability to better manage and hedge a larger combined fuel portfolio; the capacity to improve customer satisfaction; a commitment to specific rate-based investment levels; and a dedication to relieving customer liability on pending litigation. For example, to respond to regulatory policies that mandate reduced emissions, merged companies can benefit not only from larger balance sheets but also from equity infusions to invest in new technology or proven technologies. Merged entities are also afforded the opportunity to leverage combined emissions reduction portfolios.
    • Systematically price out a full range of regulatory benefits. The range should include the timing of “gives” (that is, the sharing of synergy gains with customers in the form of lower rates) as a key value lever; dedicated valuations of potential plans and sensitivities from all stakeholders’ perspectives; and a determination of the features most valued by regulators so that they can be included in a strategy for getting M&A approvals. Executives should be wary of settlements tied to performance metrics that are vaguely defined or inadequately tracked. They should also avoid deals that require new state-level legislation, because too much time will be required to negotiate and close these complex deals. Finally, executives should be wary of plans that put shareholder benefits at the end of the process, because current PUC decisions may not bind future ones.
    • Be prepared to walk away if the settlement conditions imposed by the regulators dilute the economics of the deal. This contingency plan requires that participating executives agree on the economic and timing triggers that could lead to an unattractive deal.
  3. Integrate early and aggressively. Historically, utility transactions have taken an average of 15 months from announcement to closing, given the required regulatory approvals. With such a lengthy time lag, it’s been easy for executives to fall into the trap of putting off important decisions related to the integration and post-merger organization. This delay often leads to organizational inertia as employees in the companies dig in their heels on key issues and decisions rather than begin to work together. To avoid such inertia, early momentum in the integration effort, embodied in the steps outlined below, is critical.
    • Announce the executive team’s organization early on. Optimally, announcements should be made within the first 90 days, and three or four well-structured senior-management workshops with the two CEOs and key executives should occur within the first two months. The decisions announced should be based on such considerations as the specific business unit and organizational options, available leadership talent and alignment with synergy targets by area.
    • Make top-down decisions about integration approach according to business and function. Many utility mergers appear to adopt a “template” approach to integration that leads to a false sense of comfort regarding the process. Instead, managers should segment decision making for each business unit and function. For example, when the acquirer has a best-practice model for fossil operations, the target’s plants and organization should simply be absorbed into the acquirer’s model. When both companies have strong practices, a more careful integration will be required. And when both companies need to transform a particular function, the integration approach should be tailored to achieve a change in collective performance.
    • Set clear guidelines and expectations for the integration. A critical part of jump-starting the integration process is appointing an integration officer with true decision-making authority, and articulating the guidelines that will serve as a road map for the integration teams. These guidelines should clearly describe the roles of the corporation and individual operating teams, as well as provide specific directions about control and organizational layers and review and approval mechanisms for major decisions.
    • Systematically address legal and organizational bottlenecks. The integration’s progress can be impeded by legal or organizational constraints on the sharing of sensitive information. In such situations, significant progress can be achieved by using clean teams – neutral people who haven’t worked in the area before – to ensure data is exchanged and sanitized analytical results are shared. Improved information sharing can aid executive-level decision making when it comes to commercially sensitive areas such as commercial marketing-and-trading portfolios, performance improvements, and other unregulated business-planning and organizational decisions.
  4. Use a top-down approach to design realistic but ambitious economic targets. Synergies from utility mergers have short shelf lives. With limits on a post-merger rate freeze or rate-case filing, the time to achieve the targets is short. To achieve their economic targets, merged utilities should:
    • Construct the top five to 10 synergy initiatives to capture value and translate them into road maps with milestones and accountabilities. Identifying and promoting clear targets early in the integration effort lead to a focus on the merger’s synergy goals.
    • Identify the links between synergy outcomes and organizational decisions early on, and manage those decisions from the top. Such top-down decisions should specify which business units or functional areas are to be consolidated. Integration teams often become gridlocked over such decisions because of conflicts of interest and a lack of objectivity.
    • Control the human resources policies related to the merger. Important top-down decisions include retention and severance packages and the appointment process. Alternative severance, retirement and retention plans should be priced explicitly to ensure a tight yet fair balance between the plans’ costs and benefits.
    • Exploit the merger to create opportunities for significant reductions in the acquirer’s cost base. Typical merger processes tend to focus on reductions in the target’s cost base. However, in many cases the acquirer’s cost base can also be reduced. Such reductions can be a significant source of value, making the difference between success and failure. They also communicate to the target’s employees that the playing field is level.
    • Avoid the tendency to declare victory too soon. Most synergies are related to standardization and rationalization of practices, consolidation of line functions and optimization of processes and systems. These initiatives require discipline in tracking progress against key milestones and cost targets. They also require a tough-minded assessment of red flags and cost increases over a sustained time frame – often two to three years after the closing.
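The timing of regulatory “gives” noted in objective 2 is itself a quantifiable value lever. A minimal sketch, assuming a flat discount rate and entirely hypothetical synergy and giveback schedules, of how deferring the customer sharing of synergies changes the present value retained by shareholders:

```python
# Present value of merger synergies net of regulatory "gives", under
# hypothetical cash flows -- an illustration, not a valuation model.

def npv(cash_flows, rate):
    """Discount a list of annual cash flows (years 1..n) at a flat rate."""
    return sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cash_flows))

rate = 0.08
synergies = [50, 80, 100, 100, 100]       # $M captured per year

# Scenario A: rate reductions shared with customers from year 1.
gives_early = [40, 40, 40, 40, 40]
# Scenario B: the same total giveback ($200M), deferred to years 3-5.
gives_late = [0, 0, 66.7, 66.7, 66.6]

retained_a = npv([s - g for s, g in zip(synergies, gives_early)], rate)
retained_b = npv([s - g for s, g in zip(synergies, gives_late)], rate)

print(f"Retained value, early gives: ${retained_a:.1f}M")
print(f"Retained value, late gives:  ${retained_b:.1f}M")
```

Even with identical nominal givebacks, the deferred schedule retains more discounted value for the acquirer, which is why the timing of gives belongs in the regulatory negotiation alongside their size.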

RECOMMENDATIONS: A DISCIPLINED PROCESS IS KEY

Despite the inherent difficulties, M&A should remain a strategic option for most utilities. If they can avoid the pitfalls of previous rounds of mergers, executives have an opportunity to create shareholder value, but a disciplined and comprehensive approach to both the M&A process and the subsequent integration is essential.

Such an approach begins with executives who insist on a clear rationale for value creation with and without M&A. Their teams must make pragmatic assessments of a deal’s economics relative to its potential for improving base business. If they determine the deal has a strong rationale, they must then orchestrate a regulatory process that considers broad options beyond rate reductions. Having the discipline to walk away if the settlement conditions dilute the deal’s economics is a key part of this process. A disciplined approach also requires that an aggressive integration effort begin as soon as the deal has been announced – an effort that entails a modular approach with clear, fast, top-down decisions on critical issues. Finally, a disciplined process requires relentless follow-through by executives if the deal is to achieve ambitious yet realistic synergy targets.

The Technology Demonstration Center

When a utility undergoes a major transformation – such as adopting new technologies like advanced metering – the costs and time involved require that the changes are accepted and adopted by each of the three major stakeholder groups: regulators, customers and the utility’s own employees. A technology demonstration center serves as an important tool for promoting acceptance and adoption of new technologies by displaying tangible examples and demonstrating the future customer experience. IBM has developed the technology center development framework as a methodology to efficiently define the strategy and tactics required to develop a technology center that will elicit the desired responses from those key stakeholders.

KEY STAKEHOLDER BUY-IN

To successfully implement major technology change, utilities need to consider the needs of the three major stakeholders: regulators, customers and employees.

Regulators. Utility regulators are naturally wary of any transformation that affects their constituents on a grand scale, and thus their concerns must be addressed to encourage regulatory approval. The technology center serves two purposes in this regard: educating the regulators and showing them that the utility is committed to educating its customers on how to receive the maximum benefits from these technologies.

Given the size of a transformation project, it’s critical that regulators support the increased spending required and any consequent increase in rates. Many regulators, even those who favor new technologies, believe that the utility will benefit the most and should thus cover the cost. If utilities expect cost recovery, the regulators need to understand the complexity of new technologies and the costs of the interrelated systems required to manage these technologies. An exhibit in the technology center can go “behind the curtain,” giving regulators a clearer view of these systems, their complexity and the overall cost of delivering them.

Finally, each stage in the deployment of new technologies requires a new approval process and provides opportunities for resistance from regulators. For the utility, staying engaged with regulators throughout the process is imperative, and the technology center provides an ideal way to continue the conversation.

Customers. Once regulators give their approval, the utility must still make its case to the public. The success of a new technology project rests on customers’ adoption of the technology. For example, if customers continue using appliances as they always have, at a steady pace throughout the day without shifting consumption to off-peak hours, the utility will fail to achieve the major planned cost advantage: a reduction in production facilities. Wide-scale customer adoption is therefore key. Indeed, general estimates indicate that customer adoption rates of roughly 20 percent are needed to break even in a critical peak-pricing model. [1]

Given the complexity of these technologies, it’s quite possible that customers will fail to see the value of the program – particularly in the context of the changes in energy use they will need to undertake. A well-designed campaign that demonstrates the benefits of tiered pricing will go a long way toward encouraging adoption. By showcasing the future customer experience, the technology center can provide a tangible example that serves to create buzz, get customers excited and educate them about benefits.

Employees. Obtaining employee buy-in on new programs is as important as winning over the other two stakeholder groups. For transformation to be successful, an understanding of the process must be moved out of the boardroom and communicated to the entire company. Employees whose responsibilities will change need to know how they will change, how their interactions with the customer will change and what benefits are in it for them. At the same time, utility employees are also customers. They talk to friends and spread the message. They can be the utility’s best advocates or its greatest detractors. Proper internal communication is essential for a smooth transition from the old ways to the new, and the technology center can and should be used to educate employees on the transformation.

OTHER GOALS FOR THE TECHNOLOGY DEMONSTRATION CENTER

The objectives discussed above represent one possible set of goals for a technology center. Utilities may well have other reasons for erecting the technology center, and these should be addressed as well. As an example, the utility may want to present a tangible display of its plans for the future to its investors, letting them know what’s in store for the company. Likewise, the utility may want to be a leader in its industry or region, and the technology center provides a way to demonstrate that to its peer companies. The utility may also want to be recognized as a trendsetter in environmental progress, and a technology center can help people understand the changes the company is making.

The technology center needs to be designed with the utility’s particular environment in mind. The technology center development framework is, in essence, a road map created to aid the utility in prioritizing the technology center’s key strategic priorities and components to maximize its impact on the intended audience.

DEVELOPING THE TECHNOLOGY CENTER

Unlike other aspects of a traditional utility, the technology center needs to appeal to customers visually, as well as explain the significance and impact of new technologies. The technology center development framework presented here was developed by leveraging trends and experiences in retail, including “experiential” retail environments such as the Apple Stores in malls across the United States. These new retail environments offer a much richer and more interactive experience than traditional retail outlets, which may employ some basic merchandising and simply offer products for sale.

Experiential environments have arisen partly as a response to competition from online retailers and the increased complexity of products. The technology center development framework uses the same state-of-the-art design strategies that we see adopted by high-end retailers, inspiring the executives and leadership of the utility to create a compelling experience that will enable the utility to elicit the desired response and buy-in from the stakeholders described above.

Phase 1: Technology Center Strategy

During this phase, a utility typically spends four to eight weeks developing an optimal strategy for the technology center. To accomplish this, planners identify and delineate in detail three major elements:

  • The technology center’s goals;
  • Its target audience; and
  • Content required to achieve those goals.

As shown in Figure 1, these pieces are not mutually exclusive; in fact, they’re more likely to be iterative: The technology center’s goals set the stage for determining the audience and content, and those two elements influence each other. The outcome of this phase is a complete strategy road map that defines the direction the technology center will take.

To understand the Phase 1 objectives properly, it’s necessary to examine the logic behind them. The methodology focuses on the three elements mentioned previously – goals, audience and content – because these are easily overlooked and misaligned by organizations.

Utility companies inevitably face multiple and competing goals. Thus, it’s critical to identify the goals specifically associated with the technology center and to distinguish them from other corporate goals or goals associated with implementing a new technology. Taking this step forces the organization to define which goals can be met by the technology center with the greatest efficiency, and establishes a clear plan that can be used as a guide in resolving the inevitable future conflicts.

Similarly, the stakeholders served by the utility represent distinct audiences. Based on the goals of the center and the organization, as well as the internal expectations set by managers, the target audience needs to be well defined. Many important facets of the technology center, such as content and location, will be partly determined by the target audience. Finally, the right content is critical to success. A regulator may want to see different information than customers.

In addition, the audience’s specific needs dictate different content options. Do the utility’s customers care about the environment? Do they care more about advances in technology? Are they concerned about how their lives will change in the future? These questions need to be answered early in the process.

The key to successfully completing Phase 1 is constant engagement with the utility’s decision makers, since their expectations for the technology center will vary greatly depending on their responsibilities. Throughout this phase, the technology center’s planners need to meet with these decision makers on a regular basis, gather and respect their opinions, and come to the optimal mix for the utility on the whole. This can be done through interviews or a series of workshops, whichever is better suited for the utility. We have found that by employing this process, an organization can develop a framework of goals, audience and content mix that everyone will agree on – despite differing expectations.

Phase 2: Design Characteristics

The second phase of the development framework focuses on the high-level physical layout of the technology center. These “design characteristics” will affect the overall layout and presentation of the technology center.

We have identified six key characteristics that need to be determined. Each is developed as a trade-off between two extremes; this helps utilities understand the issues involved and debate the solutions. Again, there are no right answers to these issues – the optimal solution depends on the utility’s environment and expectations:

  • Small versus large. The technology center can be small, like a cell phone store, or large, like a Best Buy.
  • Guided versus self-guided. The center can be designed to allow visitors to guide themselves, or staff can be retained to guide visitors through the facility.
  • Single versus multiple. There may be a single site, or multiple sites. As with the first issue (small versus large), one site may be a large flagship facility, while the others represent smaller satellite sites.
  • Independent versus linked. Depending on the nature of the exhibits, technology center sites may operate independently of each other or include exhibits that are remotely linked in order to display certain advanced technologies.
  • Fixed versus mobile. The technology center can be in a fixed physical location, but it can also be mounted on a truck bed to bring the center to audiences around the region.
  • Static versus dynamic. The exhibits in the technology center may become outdated. How easy will it be to change or swap them out?

Figure 2 illustrates a sample set of design characteristics for one technology center, using a sample design characteristic map. This map shows each of the characteristics laid out around the hexagon, with the preference ranges represented at each vertex. By mapping out the utility’s options with regard to the design characteristics, it’s possible to visualize the trade-offs inherent in these decisions, and thus identify the optimal design for a given environment. In addition, this type of map facilitates reporting on the project to higher-level executives, who may benefit from a visual executive summary of the technology center’s plan.
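The design characteristic map in Figure 2 can also be represented as plain data: each of the six trade-offs becomes a score between 0.0 (first extreme) and 1.0 (second extreme). The sketch below, with hypothetical scores, renders a text version of the map for quick comparison of candidate designs:

```python
# A minimal data sketch of a design characteristic map: each of the
# six trade-offs is scored from 0.0 (left extreme) to 1.0 (right
# extreme). The example scores are hypothetical.

CHARACTERISTICS = [
    ("Small", "Large"),
    ("Guided", "Self-guided"),
    ("Single", "Multiple"),
    ("Independent", "Linked"),
    ("Fixed", "Mobile"),
    ("Static", "Dynamic"),
]

def render_map(scores):
    """Print a text version of the map: one bar per trade-off axis."""
    assert len(scores) == len(CHARACTERISTICS)
    lines = []
    for (left, right), s in zip(CHARACTERISTICS, scores):
        if not 0.0 <= s <= 1.0:
            raise ValueError(f"score out of range: {s}")
        pos = round(s * 10)
        bar = "-" * pos + "o" + "-" * (10 - pos)
        lines.append(f"{left:>12} |{bar}| {right}")
    return "\n".join(lines)

# Hypothetical center: a large, self-guided flagship with mobile satellites.
print(render_map([0.8, 0.7, 0.6, 0.3, 0.4, 0.9]))
```

Scoring the axes numerically makes it easy to put two candidate designs side by side in a workshop and debate each trade-off explicitly.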

The tasks in Phase 2 require the utility’s staff to be just as engaged as in the strategy phase. A workshop or interviews with staff members who understand the various needs of the utility’s region and customer base should be conducted to work out an optimal plan.

Phase 3: Execution Variables

Phases 1 and 2 provide a strategy and design for the technology center, and allow the utility’s leadership to formulate a clear vision of the project and come to agreement on the ultimate purpose of the technology center. Phase 3 involves engaging the technology developers to identify which aspects of the new technology – for example, smart appliances, demand-side management, outage management and advanced metering – will be displayed at the technology center.

During this phase, utilities should create a complete catalog of the technologies that will be demonstrated, and match them up against the strategic content mix developed in Phase 1. A ranking is then assigned to each potential new technology based on several considerations, such as how well it matches the strategy, how feasible it is to demonstrate the given technology at the center, and what costs and resources would be required. Only the most efficient and well-matched technologies and exhibits will be displayed.
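The ranking step described above amounts to a weighted scoring exercise. A minimal sketch, in which the candidate technologies, criterion scores and weights are all hypothetical placeholders:

```python
# Weighted scoring of candidate exhibit technologies against the
# Phase 1 strategy. Weights and scores are hypothetical.

CRITERIA_WEIGHTS = {"strategy_match": 0.5, "feasibility": 0.3, "cost": 0.2}

candidates = {
    "smart appliances":  {"strategy_match": 9, "feasibility": 8, "cost": 6},
    "advanced metering": {"strategy_match": 8, "feasibility": 9, "cost": 8},
    "outage management": {"strategy_match": 6, "feasibility": 5, "cost": 4},
}

def score(tech_scores):
    """Weighted sum of criterion scores (higher is better; the cost
    criterion is assumed to be pre-scored so that higher = cheaper)."""
    return sum(CRITERIA_WEIGHTS[c] * v for c, v in tech_scores.items())

ranked = sorted(candidates, key=lambda t: score(candidates[t]), reverse=True)
for tech in ranked:
    print(f"{tech}: {score(candidates[tech]):.1f}")
```

Only the top-ranked technologies would go forward as exhibits; the rest are deferred or dropped, keeping the center aligned with the strategic content mix.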

During Phase 3, outside vendors are also engaged, including architects, designers, mobile operators (if necessary) and real estate agents, among others. With the first two phases providing a guide, the utility can now open discussions with these vendors and present a clear picture of what it wants. The technical requirements for each exhibit will be cataloged and recorded to ensure that any design will take all requirements into account. Finally, the budget and work plan are written and finalized.

CONCLUSION

With the planning framework completed, the team can now build the center. The framework serves as the blueprint for the center, and all relevant benchmarks must be transparent and open for everyone to see. Disagreements during the buildout phase can be referred back to the framework, and issues that don’t fit the framework are discarded. In this way, the utility can ensure that the technology center will meet its goals and serve as a valuable tool in the process of transformation.

Thank you to Ian Simpson, IBM Global Business Services, for his contributions to this paper.

ENDNOTE

  1. Critical peak pricing refers to the model whereby utilities use peak pricing only on days when demand for electricity is at its peak, such as extremely hot days in the summer.

The Virtual Generator

Electric utility companies today constantly struggle to find a balance between generating sufficient power to satisfy their customers’ dynamic load requirements and minimizing their capital and operating costs. They spend a great deal of time and effort attempting to optimize every element of their generation, transmission and distribution systems to achieve both their physical and economic goals.

In many cases, “real” generators waste valuable resources – waste that if not managed efficiently can go directly to the bottom line. Energy companies therefore find the concept of a “virtual generator,” or a virtual source of energy that can be turned on when needed, very attractive. Although virtual generators generally represent only a small percentage of a utility’s overall generation capacity, they are quick to deploy, affordable and cost-effective, and they represent a form of “green energy” that can help utilities meet carbon emission standards.

Virtual generators use forms of dynamic voltage and capacitance (Volt/VAr) adjustments that are controlled through sensing, analytics and automation. The overall process involves first flattening or tightening the voltage profiles by adding additional voltage regulators to the distribution system. Then, by moving the voltage profile up or down within the operational voltage bounds, utilities can achieve significant benefits (Figure 1). It’s important to understand, however, that because voltage adjustments will influence VArs, utilities must also adjust both the placement and control of capacitors (Figure 2).

Various business drivers will influence the use of Volt/VAr. A utility could, for example, use Volt/VAr to:

  • Respond to an external system-wide request for emergency load reduction;
  • Assist in reducing a utility’s internal load – both regional and throughout the entire system;
  • Target specific feeder load reduction through the distribution system;
  • Respond as a peak load relief (a virtual peaker);
  • Optimize Volt/VAr for better reliability and more resiliency;
  • Maximize the efficiency of the system and subsequently reduce energy generation or purchasing needs;
  • Achieve economic benefits, such as generating revenue by selling power on the spot market; and
  • Supply VArs to supplement off-network deficiencies.

Each of the above potential benefits falls into one of four domains: peaking relief, energy conservation, VAr management or reliability enhancement. The peaking relief and energy conservation domains deal with load reduction; VAr management, logically enough, involves management of VArs; and reliability enhancement actually increases load. In this latter domain, the utility will use increased voltage to enable greater voltage tolerances in self-healing grid scenarios, or to improve the performance of non-constant-power devices so that they can be removed from the system as soon as possible, thereby improving diversity.

Volt/VAr optimization can be applied to all of these scenarios. It is intended either to drive the power factor of a utility’s distribution network toward unity, or to purposefully make the power factor leading in anticipation of a change in load characteristics.
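The relationship between real power, reactive power and power factor underlying this optimization can be sketched in a few lines. The feeder loadings and capacitor bank size below are hypothetical:

```python
import math

# Power factor before and after capacitor switching -- a minimal
# sketch of the Volt/VAr relationship described above.

def power_factor(p_kw, q_kvar):
    """PF = P / |S|, where |S| = sqrt(P^2 + Q^2)."""
    return p_kw / math.hypot(p_kw, q_kvar)

p, q = 4000.0, 3000.0          # feeder load: 4 MW, 3 MVAr (lagging)
cap_bank = 2400.0              # switched capacitor bank, kVAr

pf_before = power_factor(p, q)
pf_after = power_factor(p, q - cap_bank)

print(f"PF before: {pf_before:.3f}")   # 0.800
print(f"PF after:  {pf_after:.3f}")    # closer to unity
```

Switching the capacitor bank in offsets most of the lagging VArs, moving the power factor toward unity; leaving some capacitance unswitched (or overcompensating) is how a utility could deliberately run leading in anticipation of a load change.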

Each of these potential benefits comes from solving a different business problem. Because of this, at times they can even be at odds with each other. Utilities must therefore create fairly complex business rules supported by automation to resolve any conflicts that arise.
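At its simplest, such conflict resolution can take the form of a prioritized objective table. The sketch below is purely illustrative; the objective names and priority ordering are hypothetical, not drawn from any utility's actual rule set:

```python
# Illustrative conflict resolution between competing Volt/VAr objectives.
# All names and priorities below are hypothetical.

PRIORITY = {                       # lower number = higher priority
    "emergency_load_reduction": 0,
    "peak_relief": 1,
    "var_management": 2,
    "energy_conservation": 3,
    "spot_market_sale": 4,
}

def resolve(active_objectives):
    """Pick the single objective the automation should pursue right now.

    `active_objectives` is a list of currently requested objective names.
    In a real system each entry would also carry target values, time
    windows and operating constraints.
    """
    if not active_objectives:
        return None
    return min(active_objectives, key=lambda name: PRIORITY[name])

print(resolve(["spot_market_sale", "peak_relief"]))  # peak_relief
```

A real rule engine would weigh economics and constraints rather than a static ranking, but the principle — automation, not operators, arbitrating between objectives — is the same.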

Although the concept of load reduction using Volt/VAr techniques is not new, the ability to automate the capabilities in real time and drive the solutions with various business requirements is a relatively recent phenomenon. Energy produced with a virtual generator is neither free nor unlimited. However, it is real in the sense that it allows the system to use energy more efficiently.

A number of things are driving utilities' current interest in virtual generators, including the fact that sensors, analytics, simulation, geospatial information, business process logic and other forms of information technology are increasingly affordable and robust. In addition, lower-cost intelligent electronic devices (IEDs) make virtual generators possible and bring them within reach of most electric utility companies.

The ability to innovate an entirely new solution to support the above business scenarios is now within the realm of possibility for the electric utility company. As an added benefit, much of the base IT infrastructure required for virtual generators is the same as that required for other forms of “smart grid” solutions, such as advanced meter infrastructure (AMI), demand side management (DSM), distributed generation (DG) and enhanced fault management. Utilities that implement a well-designed virtual generator solution will ultimately be able to align it with these other power management solutions, thus optimizing all customer offerings that will help reduce load.

HOW THE SOLUTION WORKS

All utilities are required, for regulatory or reliability reasons, to stay within certain high- and low-voltage parameters for all of their customers. In the United States, the American National Standards Institute (ANSI) C84.1 standard specifies a nominal voltage of 120 volts for residential single-phase service, with a plus-or-minus 6-volt tolerance (that is, 114 to 126 volts). Other countries have similar guidelines. Whatever the actual values, all utilities are required to operate within these high- and low-voltage "envelopes." In some cases, additional requirements may be imposed on the amount of variance – the number of volts changed, or the percent change in voltage – that can take place over a period of minutes or hours.
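As a minimal illustration of the envelope just described (nominal 120 volts, plus or minus 6 volts), a compliance check might look like the following; the function name and structure are assumptions for illustration only:

```python
# Service-voltage envelope check: nominal 120 V with a +/- 6 V tolerance,
# i.e. an acceptable band of 114 to 126 volts.

NOMINAL_V = 120.0
TOLERANCE_V = 6.0

def within_envelope(voltage, nominal=NOMINAL_V, tol=TOLERANCE_V):
    """Return True if a measured service voltage falls inside the envelope."""
    return (nominal - tol) <= voltage <= (nominal + tol)

print(within_envelope(118.5))   # True
print(within_envelope(127.2))   # False: above the 126 V ceiling
```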

Commercial customers may have different high/low values, but the principle remains the same. In fact, it is the mixture of residential, commercial and industrial customers on the same feeder that makes the virtual generation solution almost a requirement if a utility wants to optimize its voltage regulation.

Although it would be ideal for a utility to deliver 120-volt power consistently to all customers, the physical properties of the distribution system as well as dynamic customer loading factors make this difficult. Most utilities are already trying to accomplish this through planning, network and equipment adjustments, and in many cases use of automated voltage control devices. Despite these efforts, however, in most networks utilities are required to run the feeder circuit at higher-than-nominal levels at the head of the circuit in order to provide sufficient voltage for downstream users, especially those at the tails or end points of the circuit.

In a few cases, electric utilities have added manual or automatic voltage regulators to step up voltage at one or more points in a feeder circuit because of nonuniform loading and/or varied circuit impedance characteristics throughout the circuit profile. This stepped-up slope, or curve, allows the utility company to comply with the voltage level requirements for all customers on the circuit. In addition, utilities can satisfy the VAr requirements for operational efficiency of inductive loads using switched capacitor banks, but they must coordinate those capacitor banks with voltage adjustments as well as power demand. Refining voltage profiles through virtual generation usually implies a tight corresponding control of capacitance as well.

The theory behind a robust Volt/VAr-regulated feeder circuit is based on the same principles, applied in an innovative manner. Rather than simply using voltage regulators to keep the voltage profile within the regulatory envelope, utilities try to "flatten" the voltage curve or slope. In practice, the overall effect is a stepped/sloped profile, because economics limit the number of voltage regulators applied per circuit. This flattening permits an overall reduction in nominal voltage; the operator may then choose to move the voltage curve up or down within the regulatory voltage envelope. Utilities derive extra benefit from this approach because all customers within a given section of a feeder circuit can be served at the same voltage level, which should result in fewer "problem" customers at unfavorable points on the circuit. It can also minimize the power wasted by overdriving the voltage at the head of the feeder in order to satisfy customers at the tails.
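The headroom available for shifting a flattened profile up or down follows directly from the envelope limits and the extreme voltage readings on the circuit. A hypothetical sketch, with illustrative readings:

```python
# Given measured voltages along a flattened feeder, compute how far the
# operator could shift the whole profile down (for load reduction) or up
# (for reliability headroom) while keeping every customer inside the
# 114-126 V envelope. Readings below are illustrative.

V_MIN, V_MAX = 114.0, 126.0

def allowable_shift(voltages):
    """Return (max_down, max_up) shifts in volts for a list of readings."""
    max_down = min(voltages) - V_MIN   # headroom before the lowest node violates
    max_up = V_MAX - max(voltages)     # headroom before the highest node violates
    return max_down, max_up

down, up = allowable_shift([121.0, 120.2, 119.4])
print(round(down, 1), round(up, 1))   # 5.4 5.0
```

The flatter the profile, the closer the minimum and maximum readings sit together, and the more usable headroom the operator has in either direction.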

THE ROLE OF AUTOMATION IN DELIVERING THE VIRTUAL GENERATOR

Although theoretically simple in concept, executing and maintaining a virtual generator solution is a complex task that requires real-time coordination of many assets and business rules. Electrical distribution networks are dynamic systems with constantly changing demands, parameters and influencers. Without automation, utilities would find it impossible to deliver and support virtual generators, because it’s infeasible to expect a human – or even a number of humans – to operate such systems affordably and reliably. Therefore, utilities must leverage automation to put humans in monitoring rather than controlling roles.

There are many “inputs” to an automated solution that supports a virtual generator. These include both dynamic and static information sources. For example, real-time sensor data monitoring the condition of the networks must be merged with geospatial information, weather data, spot energy pricing and historical data in a moment-by-moment, repeating cycle to optimize the business benefits of the virtual generator. Complicating this, in many cases the team managing the virtual generator will not “own” all of the inputs required to feed the automated system. Frequently, they must share this data with other applications and organizational stakeholders. It’s therefore critical that utilities put into place an open, collaborative and integrated technology infrastructure that supports multiple applications from different parts of the business.

One of the most critical aspects of automating a virtual generator is having the right analytical capabilities to decide where and how the solution should be applied to support the organization's overall business objectives. For example, utilities should use load predictors and state estimators to determine future states of the network based on load projections under the various Volt/VAr scenarios being considered. They should also use advanced analytics to gauge the resiliency of the network, or the probability that internal or external events will influence the virtual generator's application requirements. Still other analyses can provide utilities with a current view of the virtual generator's state and how much energy it is returning to the system.
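A production load predictor would draw on weather, history and real-time network state; the sketch below shows only the simplest conceivable form of such an input — a moving-average forecast — with all numbers illustrative:

```python
# Toy load predictor: forecast the next interval's feeder load as the
# mean of the most recent readings. A real predictor would incorporate
# weather, day-of-week profiles and state estimation; this only
# illustrates the kind of input a Volt/VAr controller consumes.

def forecast_next(loads_mw, window=3):
    """Forecast the next interval's load (MW) from the last `window` readings."""
    recent = loads_mw[-window:]
    return sum(recent) / len(recent)

history_mw = [42.0, 44.5, 47.0, 49.5]    # rising afternoon load
print(forecast_next(history_mw))          # 47.0
```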

While all of these techniques are important in developing a comprehensive load-management strategy, they must be unified into an actionable, business-driven solution. That solution must incorporate the value delivered by each voltage management measure, its availability, and the ability to coordinate all of the measures at all times. A voltage management solution that is already fully committed to supporting customer load requirements throughout a peak day will be of little further use for load management. It is therefore imperative that the utility understand the combined effect of all voltage management measures at the moments they are needed to support the energy demands on the system.

Tomorrow’s Bill Payment Solutions for Today’s Businesses

Providing consumers with innovative services for more than 150 years, Western Union is an established leader in electronic and cash bill-payment solutions. We introduced our first consumer-to-consumer money transfer service in 1871 and began offering consumer-to-business bill payment services in 1989 with the introduction of the Western Union Quick Collect® service, providing consumers in the United States with convenient walk-in agent network locations where they can pay bills in cash.

In 2008, our comprehensive suite of services has grown to include Speedpay® – an electronic bill payment option that provides businesses with Internet, IVR, desktop, mobile payments, online banking and call center solutions, as well as e-bill presentment with payments and interactive outbound messaging integrated with payment processing.

THE CONSUMER-TO-BUSINESS SEGMENT

Western Union’s electronic and cash bill payment services provide consumers with fast, convenient ways to send one-time or recurring payments to a broad spectrum of industries. At Western Union we have relationships with more than 6,000 businesses and organizations that receive consumer payments, including utilities, auto finance companies, mortgage servicers, financial service providers and government agencies. These relationships form a core component of our consumer-to-business payment service and are one reason we were able to process 404 million consumer-to-business transactions in 2007.

PORTFOLIO OF SERVICES

Our consumer-to-business services give consumers choices in payment type and method, and include the following options:

  • Electronic payments. Consumers and billers use our Speedpay® service in the United States and the United Kingdom to make consumer payments to a variety of billers using credit cards, ATM cards and debit cards, and via ACH withdrawal. Payments are initiated through multiple channels, including biller-hosted websites, westernunion.com, IVR units, Online Banking websites and call centers.
  • Cash payments. Consumers use our Quick Collect® or Prepaid® services to send guaranteed funds to businesses and government agencies using cash (and in select locations, debit cards). Quick Collect is available at nearly 60,000 Western Union agent locations across the United States and Canada, while our Prepaid service can be accessed at more than 40,000 U.S. locations. Consumers can also use our Convenience Pay® service to send payments by cash or check from a smaller number of agent locations primarily to utilities and telecommunication providers.

DISTRIBUTION AND MARKETING CHANNELS

Our electronic payment services are available primarily through IVR, over the Internet and via a call center desktop application used while the consumer speaks with a biller's customer service representative. Through our Quick Pay® service, billers can receive payments sent from outside the United States or Canada through more than 320,000 agent locations in more than 200 countries and territories around the world. We work in partnership with our billers to market our services to consumers in a number of ways, including direct mail, Email, Internet and point-of-sale advertising.

ONLINE BANKING

In late 2007, Western Union launched its Online Banking initiative, helping to change the way consumers pay their bills. The channel accelerates payment delivery to billers from two to four days down to next-day or same-day, and enables Western Union Payment Services to process bill payments initiated by consumers from their banks' online banking sites.

Western Union plans to work with the nation’s largest banks to provide your customers with a new class of online banking payment that allows them to make same- and next-day payments that are posted and funded to you faster and are of a higher quality than other online banking payments currently available.

EMAIL BILL PRESENTMENT AND PAYMENT

While the benefits of electronic bill presentment and payment are compelling for both billers and consumers, low consumer adoption rates have prevented billers from fully realizing the cost savings and improved customer service levels these services promote. Western Union® Payment Services aims to change this through its integration with Striata® Email bill presentment and payment (EBPP) solutions.

With this integrated, encrypted Email bill presentment and one-click payment service, consumers no longer need to register to receive their bill electronically, visit a separate website to download the bill and send a payment, or remember multiple user names and passwords. By removing these extra steps from the process, these services become dramatically easier to use for consumers.

The critical differentiator of the Western Union/Striata service is that the entire e-bill is delivered directly into the consumer’s in-box as an encrypted off-line attachment, enabling payment to be sent through the e-bill itself using the Western Union® Speedpay service. While complementary to existing online presentment solutions, this “push” Email billing offering can be more successful at driving adoption.

Bill Pay and Presentment Solutions for Utility Companies

Recognizing that not all customers view and pay bills in the same way, CheckFree helps you deliver a complete range of billing and payment options – from the traditional methods of receiving and paying bills by mail, in person and over the phone to complete paperless online billing and payment using either a bank or your website. CheckFree offers solutions that help you meet market demands.

Whether you need to improve a single solution or your entire offering, CheckFree can offer experience and expertise in the following payment channels:

  • By Mail. Some people still choose to receive paper bills and write checks. CheckFree can help turn these paper checks into ACH electronic debits, speeding payment collections.
  • In Person. Give your customers in-person payment convenience and choice to use cash, checks, money orders or merchant-issued certificates.
  • By Phone. Enable your customers to pay a bill anywhere they have access to a phone, all day, every day. With the recent acquisition of CheckFree by Fiserv, you can look for Fiserv’s industry-leading BillMatrix platform to be integrated into our suite of offerings.
  • Online. Deliver bill paying ease and convenience through CheckFree’s full range of electronic billing and payment (EBP) solutions at your site and beyond your site.
  • Emergency Payments. Offer a fee-based option for last-minute online payments and eliminate expenses due to delinquent payments.
  • Electronic Remittance. Provide quicker access to payment funds while reducing the cost of processing paper checks.

CUSTOMER INTERACTION OPTIMIZATION

CheckFree solutions enable you to optimize each customer interaction by offering multiple payment channel options that focus on security, reliability, functionality and convenience. Each interaction with the consumer represents an ideal opportunity to enhance the customer experience and build loyal customers.

Our Customer Interaction Optimization solutions make interactions a win/win for both you and your customers. You deliver the payment channels they seek while maintaining the ability to guide them to the most profitable channel for your organization. The ultimate business objective is to steer customers to the lower cost-to-serve billing and payment option: the online channel.

CheckFree understands your company’s strategic need to direct consumers to the optimal online channel to enhance revenue growth through reductions in operating costs. By investing in substantial consumer behavior, segmentation and marketing research, CheckFree can assist with creating marketing campaigns focused on promoting your online channel. Every bill received, payment made or visit to your website can be utilized to strategically drive adoption of online bill pay, e-bills and paper shut-off.

For more than 25 years, CheckFree has been a leading provider of electronic billing and payment services. We process more than one billion electronic payments each year. With CheckFree’s Customer Interaction Optimization solutions, you can enhance your payment offerings while improving your bottom line.

About Alcatel-Lucent

Alcatel-Lucent’s vision is to enrich people’s lives by transforming the way the world communicates. Alcatel-Lucent provides solutions that enable service providers, enterprises and governments worldwide to deliver voice, data and video communication services to end users. As a leader in carrier and enterprise IP technologies; fixed, mobile and converged broadband access; applications and services, Alcatel-Lucent offers the end-to-end solutions that enable compelling communications services for people at work, at home and on the move.

With 77,000 employees and operations in more than 130 countries, Alcatel-Lucent is a local partner with global reach. The company has the most experienced global services team in the industry and includes Bell Labs, one of the largest research, technology and innovation organizations focused on communications. Alcatel-Lucent achieved adjusted revenues of €17.8 billion in 2007, and is incorporated in France, with executive offices located in Paris.

YOUR ENERGY AND UTILITY PARTNER

Alcatel-Lucent offers comprehensive capabilities that combine carrier-grade communications technology and expertise with utility industry- specific knowledge. Alcatel-Lucent’s IP transformation expertise and utility market-specific knowledge have led to the development of turnkey communications solutions designed for the energy and utility market. Alcatel-Lucent has extensive experience in:

  • Transforming and renewing network technologies;
  • Designing and implementing SmartGrid initiatives;
  • Meeting NERC CIP compliance and security requirements;
  • Working in live power generation, transmission and distribution environments;
  • Implementing and managing complex mission-critical communications projects;
  • Developing best-in-class partnerships with organizations like CURRENT Communications, Ambient, BelAir Networks, Alvarion and others in the utility industry.

Working with Alcatel-Lucent enables energy and utility companies to realize the increased reliability and greater efficiency of next-generation communications technology, providing a platform for – and minimizing the risks associated with – moving to SmartGrid solutions. And Alcatel-Lucent helps energy and utility companies achieve compliance with regulatory requirements and reduce operational expenses while maintaining the security, integrity and high availability of their power infrastructure and services.

ALCATEL-LUCENT IP MPLS SOLUTION FOR THE NEXT-GENERATION UTILITY NETWORK

Utility companies are experienced at building and operating reliable and effective networks to ensure the delivery of essential information and maintain flawless service delivery. The Alcatel-Lucent IP/MPLS solution can enable utility operators to extend and enhance their networks with new technologies like IP, Ethernet and MPLS. These new technologies will enable the utility to optimize its network to reduce both capital expenditures and operating expenses without jeopardizing reliability. Advanced technologies also allow the introduction of new applications that can improve operational and workflow efficiency within the utility. Alcatel-Lucent leverages cutting-edge technologies along with the company's broad and deep experience in the utility industry to help utility operators build better, next-generation networks with IP/MPLS.

THE ALCATEL-LUCENT ADVANTAGE

Alcatel-Lucent has years of experience in the development of IP, MPLS and Ethernet technologies. The Alcatel-Lucent IP/MPLS solution offers utility operators the flexibility, scale and feature sets required for mission-critical operation. With the broadest portfolio of products and services in the telecommunications industry, Alcatel-Lucent has the unparalleled ability to design and deliver end-to-end solutions that drive next-generation communications networks.

Delivering the Tools for Creating the Next-Generation Electrical SmartGrid

PowerSense delivers cutting-edge monitoring and control equipment together with integrated supervision to enable the modern electrical utility to prepare its existing power infrastructure for tomorrow’s SmartGrid.

PowerSense uses world-leading technology to merge existing and new power infrastructures into the existing SCADA and IT systems of the electrical utilities. This integration of the upgraded power infrastructure and existing IT systems instantly optimizes outage and fault management, thereby decreasing customer minutes lost (the System Average Interruption Duration Index, or SAIDI).

At the same time, this integration helps the electrical utility further improve asset management (resulting in major cost savings) and power management (resulting in high-performance outage management and high power efficiency). The PowerSense product line is called DISCOS® (Distribution Networks Integrated Supervision and Control System).
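SAIDI, referenced above, is conventionally computed as total customer interruption duration divided by the total number of customers served (as defined in IEEE Std 1366). A minimal illustration with made-up outage data:

```python
# SAIDI = sum of customer-minutes interrupted / total customers served.
# The outage figures below are invented for illustration.

def saidi(interruptions, total_customers):
    """`interruptions` is a list of (customers_affected, duration_minutes)."""
    customer_minutes = sum(n * dur for n, dur in interruptions)
    return customer_minutes / total_customers

# Two outages on a 50,000-customer system:
events = [(1200, 90), (400, 30)]    # (customers affected, minutes)
print(saidi(events, 50_000))        # 2.4 customer-minutes per customer
```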

DISCOS®

The following outlines the business and system values offered by the DISCOS® product line.

Business Values

  • Cutting-edge optical technology (the sensor)
  • Easily and safely retrofitted (sensors can be fitted into all transformer types)
  • End-to-end solutions (from sensors to laptop)
  • Installation in steps (implementation based on cost-benefit analysis)

System Values
  • Current (for each phase)
  • Voltage (for each phase)
  • Frequency
  • Power active, reactive and direction
  • Distance-to-fault measurement
  • Control of breakers and service relays
  • Analog inputs
  • Measurement of harmonic content for I and V
  • Measurement of earth fault

These parameters are available for both medium- and low-voltage power lines.

OPTICAL SENSOR TECHNOLOGY

With its stability and linearity, PowerSense’s cutting-edge sensor technology is setting new standards for current measurements in general. For PowerSense’s primary business area of MV grid monitoring in particular, it is creating a completely new set of standards for how to monitor the MV power grid.

The DISCOS® Current Sensor is part of the DISCOS® Opti module. The sensor monitors current magnitude and phase angle on both the LV and MV sides of the transformer.

BASED ON THE FARADAY EFFECT

Today, only a few applications in measuring instruments are based on the Faraday rotation principle. For instance, the Faraday effect has been used for measuring optical rotary power, for amplitude modulation of light and for remote sensing of magnetic fields.

Now, thanks to advanced computing techniques, PowerSense is able to offer a low-priced optical sensor based on the Faraday effect.
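For reference, the Faraday effect rotates the polarization of light passing through a material in a magnetic field by an angle proportional to the field strength and the optical path length: theta = V · B · L, where V is the material's Verdet constant. The values below are illustrative physics-textbook numbers, not PowerSense specifications:

```python
# Faraday rotation: theta = V * B * L
#   V = Verdet constant of the material, rad/(T*m)
#   B = magnetic flux density along the light path, tesla
#   L = path length through the material, meters
# All numbers below are illustrative.

def faraday_rotation(verdet_rad_per_t_m, b_tesla, length_m):
    """Polarization rotation angle in radians."""
    return verdet_rad_per_t_m * b_tesla * length_m

# e.g. a glass with Verdet constant 10 rad/(T*m), a 0.05 T field,
# and a 2 cm optical path:
theta = faraday_rotation(10.0, 0.05, 0.02)
print(theta)    # 0.01 rad
```

Measuring this rotation angle lets the sensor infer the magnetic field, and hence the current producing it, without galvanic contact with the conductor.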

THE COMPANY

PowerSense A/S was established on September 1, 2006, by DONG Energy A/S (formerly Nesa A/S) as a spin-off of the DISCOS® product line business. The purpose of the spin-off was to ensure the best future business conditions for the DISCOS® product line.

Following the spin-off, BankInvest A/S, a Danish investment bank, holds 70 percent of the share capital; DONG Energy A/S continues to hold the remaining 30 percent.