Pepco Holdings, Inc.

The United States and the world are facing two preeminent energy challenges: the rising cost of energy and the impact of increasing energy use on the environment. As a regulated public utility and one of the largest energy delivery companies in the Mid-Atlantic region, Pepco Holdings, Inc. (PHI) recognized that it was uniquely positioned to play a leadership role in helping meet both of these challenges.

PHI calls the plan it developed to meet these challenges the Blueprint for the Future (Blueprint). The plan builds on work already begun through PHI’s Utility of the Future initiative, as well as other programs. The Blueprint focuses on implementing advanced technologies and energy efficiency programs to improve service to its customers and enable them to manage their energy use and costs. By providing tools for nearly 2 million customers across three states and the District of Columbia to better control their electricity use, PHI believes it can make a major contribution to meeting the nation’s energy and environmental challenges, and at the same time help customers keep their electric and natural gas bills as low as possible.

The PHI Blueprint is designed to give customers what they want: reasonable and stable energy costs, responsive customer service, power reliability and environmental stewardship.

PHI is deploying a number of innovative technologies. Some, such as its automated distribution system, help to improve reliability and workforce productivity. Other systems, including an advanced metering infrastructure (AMI), will enable customers to monitor and control their electricity use, reduce their energy costs and gain access to innovative rate options.

PHI’s Blueprint is both ambitious and complex. Over the next five years PHI will be deploying new technologies, modifying and/or creating numerous information systems, redefining customer and operating work processes, restructuring organizations, and managing relationships with customers and regulators in four jurisdictions. PHI intends to do all of this while continuing to provide safe and reliable energy service to its customers.

To assist in developing and executing this plan, PHI reached out to peer utilities and vendors. One significant “partner” group is the Global Intelligent Utility Network Coalition (GIUNC), established by IBM, which currently includes CenterPoint Energy (Texas), Country Energy (New South Wales, Australia) and PHI.

Leveraging these resources and others, PHI managers spent much of 2007 compiling detailed plans for realizing the Blueprint. Several aspects of these planning efforts are described below.

VISION AND DESIGN

In 2007, multiple initiatives were launched to flesh out the many aspects of the Blueprint. As Figure 1 illustrates, all of the initiatives were related and designed to generate a deployment plan based on a comprehensive review of the business and technical aspects of the project.

At this early stage, PHI does not yet have all the answers. Indeed, prematurely committing to specific technologies or designs for work that will not be completed for five years can raise the risk of obsolescence and lost investment. The deployment plan and system map, discussed in more detail below, are intended to serve as a guide. They will be updated and modified as decision points are reached and new information becomes available.

BUSINESS CASE VALIDATION

One of the first tasks was to review and define in detail the business case analyses for the project components. Both benefit assumptions and implementation costs were tested. Reference information (benchmarks) for this review came from a variety of sources: IBM’s experience in projects of similar scope and type; PHI materials and analysis; experiences reported by other GIUNC members and other utilities; and other publicly available sources. This information was compiled, and a present value analysis of discounted cash flow and rate of return was conducted, as shown in Figure 2.
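
As a rough illustration of this kind of analysis, the sketch below computes the net present value and an approximate internal rate of return for a hypothetical AMI cash flow profile. All figures are invented for illustration and do not reflect PHI’s actual business case.

```python
# Minimal present value sketch, assuming hypothetical AMI costs and benefits.

def npv(rate, cash_flows):
    """Net present value of annual cash flows, starting at year 0."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

def irr(cash_flows, lo=0.0, hi=1.0, tol=1e-6):
    """Approximate internal rate of return by bisection (one sign change)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical deployment: heavy spending up front, benefits ramping in ($M/yr).
costs    = [-40.0, -35.0, -20.0, -5.0, -5.0, -5.0]
benefits = [  0.0,  10.0,  25.0, 35.0, 40.0, 40.0]
net = [c + b for c, b in zip(costs, benefits)]

print(f"NPV at an 8% discount rate: ${npv(0.08, net):.1f}M")
print(f"Approximate IRR: {irr(net):.1%}")
```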

In addition to an “operational benefits” analysis, PHI and the Brattle Group developed value assessments associated with demand response offerings such as critical peak pricing. With demand response, peak consumption can be reduced and capacity cost avoided. This means lower total energy prices for customers and fewer new capacity additions in the market. As Figure 2 shows, in even the worst-case scenario for demand response savings, operational and customer benefits will offset the cost of PHI’s AMI investment.

The information from these various cases has since been integrated into a single program management tool. Additional capabilities for optimizing results based on value, cost and schedule were developed. Finally, dynamic relationships between variables were modeled and added to the tool, recognizing that assumptions don’t always remain constant as plans are changed. One example of this would be the likely increase in call center cost per meter when deployment accelerates and customer inquiries increase.
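
One of those dynamic relationships might be sketched as follows, using the call center example from the text; the functional form and every coefficient here are assumptions chosen purely for illustration.

```python
# Hypothetical model of call center cost per meter rising as deployment
# accelerates and customer inquiries increase; all coefficients are assumed.

BASE_COST_PER_CALL = 6.00    # $ per handled call (assumption)
BASE_INQUIRY_RATE = 0.05     # calls per newly installed meter at baseline pace
SURGE_FACTOR = 0.5           # extra inquiries per unit of acceleration (assumption)

def call_center_cost_per_meter(meters_per_month, baseline_pace=10_000):
    """Cost per meter grows when deployment runs ahead of the baseline pace."""
    acceleration = max(0.0, meters_per_month / baseline_pace - 1.0)
    inquiry_rate = BASE_INQUIRY_RATE * (1.0 + SURGE_FACTOR * acceleration)
    return BASE_COST_PER_CALL * inquiry_rate

for pace in (10_000, 20_000, 40_000):
    cost = call_center_cost_per_meter(pace)
    print(f"{pace:>6} meters/month -> ${cost:.2f} call center cost per meter")
```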

HIGH-LEVEL COMMUNICATIONS ARCHITECTURE DESIGN

To define and develop the communications architecture, PHI deployed a structured approach built around IBM’s proprietary optimal comparative communications architecture methodology (OCCAM). This methodology established the communications requirements for AMI, data architecture and other technologies considered in the Blueprint. Next, the team evaluated the existing communications infrastructure and capabilities that could be leveraged to support the new technologies. Then, alternative solutions to “close the gap” were reviewed. Finally, all of this information was incorporated into an analytical tool that matched the most appropriate communications technology to each specified geographic area and business need.
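
The sketch below gives a deliberately simplified flavor of that final matching step. It is not IBM’s proprietary OCCAM methodology; the requirement fields, technology names and figures are all invented.

```python
# Toy gap-analysis matcher: pair each area's communications requirements
# with candidate technologies. Names and numbers are illustrative only.

AREAS = {
    "dense_urban": {"bandwidth_kbps": 100, "latency_ms": 500},
    "rural":       {"bandwidth_kbps": 20,  "latency_ms": 2000},
}

TECHNOLOGIES = {
    "broadband_over_powerline": {"bandwidth_kbps": 200, "latency_ms": 300},
    "licensed_rf_mesh":         {"bandwidth_kbps": 50,  "latency_ms": 1000},
    "cellular":                 {"bandwidth_kbps": 150, "latency_ms": 400},
}

def meets(tech, req):
    """A technology qualifies if it satisfies bandwidth and latency needs."""
    return (tech["bandwidth_kbps"] >= req["bandwidth_kbps"]
            and tech["latency_ms"] <= req["latency_ms"])

def candidates_for(req):
    names = [name for name, tech in TECHNOLOGIES.items() if meets(tech, req)]
    return names or ["gap: no existing option closes the requirement"]

for area, req in AREAS.items():
    print(area, "->", candidates_for(req))
```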

SYSTEM MAP AND INFORMATION MODEL

Defining the data framework and the approach to integrating data across the program areas is essential if companies are to effectively and efficiently implement AMI systems and realize their identified benefits.

To help PHI understand what changes are needed to get from its current state to a shared vision of the future, the project team reviewed and documented the “current state” of the systems affected by its plans. Then, subject matter experts in meters, billing, outage, system design, work and workforce management, and business data analysis were engaged to expand on the data architecture information, including information on systems, functions and the process flows that tie them all together. Finally, the information gathered was used to develop a shared vision of how PHI processes, functions, systems and data will fit together in the future.

By comparing the design of as-is systems with the to-be architecture of information management and information flows, PHI identified information gaps and developed a set of next steps. One key step is establishing an “enterprise architecture” model for development. The first objective would be to establish and enforce governance policies. With these in place, PHI will define, draft and ratify a detailed enterprise architecture and enforce priorities, standards, procedures and processes.

PHASE 2 DEPLOYMENT PLAN

Based on the planning conducted over the last half of the year, a high-level project plan for Phase 2 deployment was compiled. The focus was mainly on Blueprint initiatives, while considering dependencies and constraints reported in other transformation initiatives. The deployment plan drew on PHI subject matter experts and project team leads, as well as experience gathered from other utilities.

The deployment plan includes multiple types of tasks, spanning process, organizational, technical and project management office-related activities, and covers a period of five to six years. Initiatives will be deployed in multiple releases, phased across jurisdictions (Delaware, District of Columbia, Maryland, New Jersey) and coordinated between meter installation and communications infrastructure buildout schedules.

The plan incorporates several initiatives, including process design, system development, communications infrastructure and AMI, and various customer initiatives. Because these initiatives are interrelated and complex, some programmatic initiatives are also called for, including change management, benefits realization and program management. From this deployment plan, more detailed project plans and dependencies are being developed to provide PHI with an end-to-end view of implementation.

As part of the planning effort, key risk areas for the Blueprint program were also defined, as shown in Figure 3. Input from interviews and knowledge leveraged from similar projects were included to ensure a comprehensive understanding of program risks and to begin developing mitigation strategies.

CONCLUSION

As PHI moves forward with implementation of its AMI systems, new issues and challenges are certain to arise, and programmatic elements are being established to respond. A program management office has been established and continues to drive more detail into plans while tracking and reporting progress against active elements. AMI process development is providing the details for business requirements, and system architecture discussions are resolving interface issues.

Deployment is still in its early stages, and much work lies ahead. However, with the effort grounded in a clear vision, the journey ahead looks promising.

Utility Mergers and Acquisitions: Beating the Odds

Merger and acquisition activity in the U.S. electric utility industry has increased following the 2005 repeal of the Public Utility Holding Company Act (PUHCA). A key question for the industry is not whether M&A will continue, but whether utility executives are prepared to manage effectively the complex regulatory challenges that have evolved.

M&A activity is (and always has been) the most potent, visible and (often) irreversible option available to utility CEOs who wish to reshape their portfolios and meet their shareholders’ expectations for returns. However, M&A has too often been applied reflexively – much like the hammer that sees everything as a nail.

The American utility industry is likely to undergo significant consolidation over the next five years. There are several compelling rationales for consolidation. First, M&A has the potential to offer real economic value. Second, capital-market and competitive pressures favor larger companies. Third, the changing regulatory landscape favors larger entities with the balance sheet depth to weather the uncertainties on the horizon.

LEARNING FROM THE PAST

Historically, however, acquirers have found it difficult to derive value from merged utilities. With the exception of some vertically integrated deals, most M&A deals have been value-neutral or value-diluting. This track record can be explained by a combination of factors: steep acquisition premiums, harsh regulatory givebacks, anemic cost reduction targets and (in more than half of the deals) a failure to achieve targets quickly enough to make a difference. In fact, over an eight-year period, less than half the utility mergers actually met or exceeded the announced cost reduction levels resulting from the synergies of the merged utilities (Figure 1).

The lessons learned from these transactions can be summarized as follows: Don’t overpay; negotiate a good regulatory deal; aim high on synergies; and deliver on them.

In trying to deliver value-creating deals, CEOs often bump up against the following realities:

  • The need to win approval from the target’s shareholders drives up acquisition premiums.
  • The need to receive regulatory approval for the deal and to alleviate organizational uncertainty leads to compromises.
  • Conservative estimates of the cost reductions resulting from synergies are made to reduce the risk of giving away too much in regulatory negotiations.
  • Delivering on synergies proves tougher than anticipated because of restrictions agreed to in regulatory deals or because of the organizational inertia that builds up during the 12- to 18-month approval process.

LOOKING AT PERFORMANCE

Total shareholder return (TSR) is significantly affected by two external deal negotiation levers – acquisition premiums and regulatory givebacks – and two internal levers – synergies estimated and synergies delivered. Between 1997 and 2004, mergers in all U.S. industries created an average TSR of 2 to 3 percent relative to the market index two years after closing. In contrast, utility mergers typically underperformed the utility index by about 2 to 3 percent three years after the transaction announcement. T&D mergers underperformed the index by about 4 percent, whereas mergers of vertically integrated utilities beat the index by about 1 percent three years after the announcement (Figure 2).

For 10 recent mergers, the lower the share of the merger savings retained by the utilities and the higher the premium paid for the acquisition, the greater the likelihood that the deal destroyed shareholder value, resulting in negative TSR.
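
A back-of-envelope model makes that relationship concrete: the value created for the acquirer is roughly the present value of the synergies it retains, less the premium it pays. The sketch below uses invented inputs, not data from the deals studied.

```python
# Hypothetical deal economics: PV of retained synergies minus premium paid.

def acquirer_value_created(target_value, premium_pct, annual_synergies,
                           retained_share, discount_rate=0.08, years=10):
    """All monetary inputs in $M; synergies assumed flat for `years` years."""
    premium_paid = target_value * premium_pct
    retained = annual_synergies * retained_share
    pv_synergies = sum(retained / (1 + discount_rate) ** t
                       for t in range(1, years + 1))
    return pv_synergies - premium_paid

# $5B target, 25% premium, $150M/yr gross synergies, 40% retained by shareholders.
value = acquirer_value_created(5_000, 0.25, 150, 0.40)
print(f"Value created: ${value:,.0f}M")  # negative -> the deal destroys value
```

With these assumed inputs the result is sharply negative: the premium swamps the retained synergies, echoing the pattern described above.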

Although these appear to be obvious pitfalls that a seasoned management team should be able to recognize and overcome, translating this knowledge into tangible actions and results has been difficult.

So how can utility boards and executives avoid being trapped in a cycle of doing the same thing again and again while expecting different results (Einstein’s definition of insanity)? We suggest that a disciplined end-to-end M&A approach will (if well-executed) tilt the balance in the acquirer’s favor and generate long-term shareholder value. That approach should include the following four broad objectives:

  • Establishment of a compelling strategic logic and rationale for the deal;
  • A carefully managed regulatory approval process;
  • Early and aggressive integration; and
  • A top-down approach to designing realistic but ambitious economic targets.

GETTING IT RIGHT: FOUR BROAD OBJECTIVES THAT ENHANCE M&A VALUE CREATION

To complete successful M&As, utilities must develop a more disciplined approach that incorporates the lessons learned from both utilities and other industrial sectors. At the highest level, adopting a framework with four broad objectives will enhance value creation before the announcement of the deal and through post-merger integration. To do this, utilities must:

  1. Establish a compelling strategic logic and rationale for the deal. A critical first step is asking the question, why do the merger? To answer this question, deal participants must:
    • Determine the strategic logic for long-term value creation with and without M&A. Too often, executives are optimistic about the opportunity to improve other utilities, but they overlook the performance potential in their current portfolio. For example, without M&A, a utility might be able to invest and grow its rate base, reduce the cost of operations and maintenance, optimize power generation and assets, explore more aggressive rate increases and changes to the regulatory framework, and develop the potential for growth in an unregulated environment. Regardless of whether a utility is an acquirer or a target, a quick (yet comprehensive) assessment will provide a clear perspective on potential shareholder returns (and risks) with and without M&A.
    • Conduct a value-oriented assessment of the target. Utility executives typically have an intuitive feel for the status of potential M&A targets adjacent to their service territories and in the broader subregion. However, when considering M&A, they should go beyond the obvious criteria (size and geography) and candidates (contiguous regional players) to consider specific elements that expose the target’s value potential for the acquirer. Such value drivers could include an enhanced power generation and asset mix, improvements in plant availability and performance, better cost structures, an ability to respond to the regulatory environment, and a positive organizational and cultural fit. Also critical to the assessment are the noneconomic aspects of the deal, such as headquarters sharing, potential loss of key personnel and potential paralysis of the company (for example, when a merger or acquisition freezes a company’s ability to pursue M&A and other large initiatives for two years).
    • Assess internal appetites and capabilities for M&A. Successful M&A requires a broad commitment from the executive team, enough capable people for diligence and integration, and an appetite for making the tough decisions essential to achieving aggressive targets. Acquirers should hold pragmatic executive-level discussions with potential targets to investigate such aspects as cultural fit and congruence of vision. Utility executives should conduct an honest assessment of their own management teams’ M&A capabilities and depth of talent and commitment. Among historic M&A deals, those that involved fewer than three states and those in which the acquirer was twice as big as the target were easier to complete and realized more value.
  2. Carefully manage the regulatory approval process. State regulatory approvals present the largest uncertainty and risk in utility M&A, clearly affecting the economics of any deal. However, too often, these discussions start and end with rate reductions so that the utility can secure approvals. The regulatory approval process should be similar to the rigorous due diligence that’s performed before the deal’s announcement. This means that when considering M&A, utilities should:
    • Consider regulatory benefits beyond the typical rate reductions. The regulatory approval process can be used to create many benefits that share rewards and risks, and to provide advantages tailored to the specific merger’s conditions. Such benefits include a stronger combined balance sheet and a potential equity infusion into the target’s subsidiaries; an ability to better manage and hedge a larger combined fuel portfolio; the capacity to improve customer satisfaction; a commitment to specific rate-based investment levels; and a dedication to relieving customer liability on pending litigation. For example, to respond to regulatory policies that mandate reduced emissions, merged companies can benefit not only from larger balance sheets but also from equity infusions to invest in new technology or proven technologies. Merged entities are also afforded the opportunity to leverage combined emissions reduction portfolios.
    • Systematically price out a full range of regulatory benefits. The range should include the timing of “gives” (that is, the sharing of synergy gains with customers in the form of lower rates) as a key value lever; dedicated valuations of potential plans and sensitivities from all stakeholders’ perspectives; and a determination of the features most valued by regulators so that they can be included in a strategy for getting M&A approvals. Executives should be wary of settlements tied to performance metrics that are vaguely defined or inadequately tracked. They should also avoid deals that require new state-level legislation, because too much time will be required to negotiate and close these complex deals. Finally, executives should be wary of plans that put shareholder benefits at the end of the process, because current public utility commission (PUC) decisions may not bind future ones.
    • Be prepared to walk away if the settlement conditions imposed by the regulators dilute the economics of the deal. This contingency plan requires that participating executives agree on the economic and timing triggers that could lead to an unattractive deal.
  3. Integrate early and aggressively. Historically, utility transactions have taken an average of 15 months from announcement to closing, given the required regulatory approvals. With such a lengthy time lag, it’s been easy for executives to fall into the trap of putting off important decisions related to the integration and post-merger organization. This delay often leads to organizational inertia as employees in the companies dig in their heels on key issues and decisions rather than begin to work together. To avoid such inertia, early momentum in the integration effort, embodied in the steps outlined below, is critical.
    • Announce the executive team’s organization early on. Optimally, announcements should be made within the first 90 days, and three or four well-structured senior-management workshops with the two CEOs and key executives should occur within the first two months. The decisions announced should be based on such considerations as the specific business unit and organizational options, available leadership talent and alignment with synergy targets by area.
    • Make top-down decisions about integration approach according to business and function. Many utility mergers appear to adopt a “template” approach to integration that leads to a false sense of comfort regarding the process. Instead, managers should segment decision making for each business unit and function. For example, when the acquirer has a best-practice model for fossil operations, the target’s plants and organization should simply be absorbed into the acquirer’s model. When both companies have strong practices, a more careful integration will be required. And when both companies need to transform a particular function, the integration approach should be tailored to achieve a change in collective performance.
    • Set clear guidelines and expectations for the integration. A critical part of jump-starting the integration process is appointing an integration officer with true decision-making authority, and articulating the guidelines that will serve as a road map for the integration teams. These guidelines should clearly describe the roles of the corporation and individual operating teams, as well as provide specific directions about control and organizational layers and review and approval mechanisms for major decisions.
    • Systematically address legal and organizational bottlenecks. The integration’s progress can be impeded by legal or organizational constraints on the sharing of sensitive information. In such situations, significant progress can be achieved by using clean teams – neutral people who haven’t worked in the area before – to ensure data is exchanged and sanitized analytical results are shared. Improved information sharing can aid executive-level decision making when it comes to commercially sensitive areas such as commercial marketing-and-trading portfolios, performance improvements, and other unregulated business-planning and organizational decisions.
  4. Use a top-down approach to design realistic but ambitious economic targets. Synergies from utility mergers have short shelf lives. With limits on a post-merger rate freeze or rate-case filing, the time to achieve the targets is short. To achieve their economic targets, merged utilities should:
    • Construct the top five to 10 synergy initiatives to capture value and translate them into road maps with milestones and accountabilities. Identifying and promoting clear targets early in the integration effort leads to a focus on the merger’s synergy goals.
    • Identify the links between synergy outcomes and organizational decisions early on, and manage those decisions from the top. Such top-down decisions should specify which business units or functional areas are to be consolidated. Integration teams often become gridlocked over such decisions because of conflicts of interest and a lack of objectivity.
    • Control the human resources policies related to the merger. Important top-down decisions include retention and severance packages and the appointment process. Alternative severance, retirement and retention plans should be priced explicitly to ensure a tight yet fair balance between the plans’ costs and benefits.
    • Exploit the merger to create opportunities for significant reductions in the acquirer’s cost base. Typical merger processes tend to focus on reductions in the target’s cost base. However, in many cases the acquirer’s cost base can also be reduced. Such reductions can be a significant source of value, making the difference between success and failure. They also communicate to the target’s employees that the playing field is level.
    • Avoid the tendency to declare victory too soon. Most synergies are related to standardization and rationalization of practices, consolidation of line functions and optimization of processes and systems. These initiatives require discipline in tracking progress against key milestones and cost targets. They also require a tough-minded assessment of red flags and cost increases over a sustained time frame – often two to three years after the closing.

RECOMMENDATIONS: A DISCIPLINED PROCESS IS KEY

Despite the inherent difficulties, M&A should remain a strategic option for most utilities. If they can avoid the pitfalls of previous rounds of mergers, executives have an opportunity to create shareholder value, but a disciplined and comprehensive approach to both the M&A process and the subsequent integration is essential.

Such an approach begins with executives who insist on a clear rationale for value creation with and without M&A. Their teams must make pragmatic assessments of a deal’s economics relative to its potential for improving base business. If they determine the deal has a strong rationale, they must then orchestrate a regulatory process that considers broad options beyond rate reductions. Having the discipline to walk away if the settlement conditions dilute the deal’s economics is a key part of this process. A disciplined approach also requires that an aggressive integration effort begin as soon as the deal has been announced – an effort that entails a modular approach with clear, fast, top-down decisions on critical issues. Finally, a disciplined process requires relentless follow-through by executives if the deal is to achieve ambitious yet realistic synergy targets.

The Technology Demonstration Center

When a utility undergoes a major transformation – such as adopting new technologies like advanced metering – the costs and time involved require that the changes are accepted and adopted by each of the three major stakeholder groups: regulators, customers and the utility’s own employees. A technology demonstration center serves as an important tool for promoting acceptance and adoption of new technologies by displaying tangible examples and demonstrating the future customer experience. IBM has developed the technology center development framework as a methodology to efficiently define the strategy and tactics required to develop a technology center that will elicit the desired responses from those key stakeholders.

KEY STAKEHOLDER BUY-IN

To successfully implement major technology change, utilities need to consider the needs of the three major stakeholders: regulators, customers and employees.

Regulators. Utility regulators are naturally wary of any transformation that affects their constituents on a grand scale, and thus their concerns must be addressed to encourage regulatory approval. The technology center serves two purposes in this regard: educating the regulators and showing them that the utility is committed to educating its customers on how to receive the maximum benefits from these technologies.

Given the size of a transformation project, it’s critical that regulators support the increased spending required and any consequent increase in rates. Many regulators, even those who favor new technologies, believe that the utility will benefit the most and should thus cover the cost. If utilities expect cost recovery, the regulators need to understand the complexity of new technologies and the costs of the interrelated systems required to manage these technologies. An exhibit in the technology center can go “behind the curtain,” giving regulators a clearer view of these systems, their complexity and the overall cost of delivering them.

Finally, each stage in the deployment of new technologies requires a new approval process and provides opportunities for resistance from regulators. For the utility, staying engaged with regulators throughout the process is imperative, and the technology center provides an ideal way to continue the conversation.

Customers. Once regulators give their approval, the utility must still make its case to the public. The success of a new technology project rests on customers’ adoption of the technology. For example, if customers continue using appliances as they always have, at a regular pace throughout the day without adjusting for off-peak pricing, the utility will fail to achieve the major planned cost advantage: a reduction in production facilities. Wide-scale customer adoption is therefore key. Indeed, general estimates indicate that customer adoption rates of roughly 20 percent are needed to break even in a critical peak-pricing model. [1]
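
A simple calculation shows how a break-even estimate of that kind can arise. Every figure below (program cost, load shifted per adopter, avoided capacity cost) is an assumption for illustration, not a number taken from the endnote’s source.

```python
# Back-of-envelope break-even for a critical peak-pricing program.

CUSTOMERS = 1_000_000
PROGRAM_COST_PER_YEAR = 12.0e6        # $: metering, IT, marketing (assumed)
PEAK_KW_SHIFTED_PER_ADOPTER = 0.8     # kW reduced during critical peaks (assumed)
AVOIDED_CAPACITY_COST = 75.0          # $ per kW-year of peaking capacity (assumed)

def net_benefit(adoption_rate):
    kw_avoided = CUSTOMERS * adoption_rate * PEAK_KW_SHIFTED_PER_ADOPTER
    return kw_avoided * AVOIDED_CAPACITY_COST - PROGRAM_COST_PER_YEAR

for rate in (0.10, 0.15, 0.20, 0.25):
    print(f"{rate:.0%} adoption -> net ${net_benefit(rate)/1e6:+.1f}M per year")
```

With these assumed inputs the program crosses break-even at exactly 20 percent adoption; real estimates depend heavily on local capacity prices and customer response.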

Given the complexity of these technologies, it’s quite possible that customers will fail to see the value of the program – particularly in the context of the changes in energy use they will need to undertake. A well-designed campaign that demonstrates the benefits of tiered pricing will go a long way toward encouraging adoption. By showcasing the future customer experience, the technology center can provide a tangible example that serves to create buzz, get customers excited and educate them about benefits.

Employees. Obtaining employee buy-in on new programs is as important as winning over the other two stakeholder groups. For transformation to be successful, an understanding of the process must be moved out of the boardroom and communicated to the entire company. Employees whose responsibilities will change need to know how they will change, how their interactions with the customer will change and what benefits are in it for them. At the same time, utility employees are also customers. They talk to friends and spread the message. They can be the utility’s best advocates or its greatest detractors. Proper internal communication is essential for a smooth transition from the old ways to the new, and the technology center can and should be used to educate employees on the transformation.

OTHER GOALS FOR THE TECHNOLOGY DEMONSTRATION CENTER

The objectives discussed above represent one possible set of goals for a technology center. Utilities may well have other reasons for erecting the technology center, and these should be addressed as well. As an example, the utility may want to present a tangible display of its plans for the future to its investors, letting them know what’s in store for the company. Likewise, the utility may want to be a leader in its industry or region, and the technology center provides a way to demonstrate that to its peer companies. The utility may also want to be recognized as a trendsetter in environmental progress, and a technology center can help people understand the changes the company is making.

The technology center needs to be designed with the utility’s particular environment in mind. The technology center development framework is, in essence, a road map created to aid the utility in defining and ranking the technology center’s key strategic priorities and components to maximize its impact on the intended audience.

DEVELOPING THE TECHNOLOGY CENTER

Unlike other aspects of a traditional utility, the technology center needs to appeal to customers visually, as well as explain the significance and impact of new technologies. The technology center development framework presented here was developed by leveraging trends and experiences in retail, including “experiential” retail environments such as the Apple Stores in malls across the United States. These new retail environments offer a much richer and more interactive experience than traditional retail outlets, which may employ some basic merchandising and simply offer products for sale.

Experiential environments have arisen partly as a response to competition from online retailers and the increased complexity of products. The technology center development framework uses the same state-of-the-art design strategies adopted by high-end retailers, inspiring the utility’s executives and leadership to create a compelling experience that elicits the desired response and buy-in from the stakeholders described above.

Phase 1: Technology Center Strategy

During this phase, a utility typically spends four to eight weeks developing an optimal strategy for the technology center. To accomplish this, planners identify and delineate in detail three major elements:

  • The technology center’s goals;
  • Its target audience; and
  • Content required to achieve those goals.

As shown in Figure 1, these pieces are not mutually exclusive; in fact, they’re more likely to be iterative: The technology center’s goals set the stage for determining the audience and content, and those two elements influence each other. The outcome of this phase is a complete strategy road map that defines the direction the technology center will take.

To understand the Phase 1 objectives properly, it’s necessary to examine the logic behind them. The methodology focuses on the three elements mentioned previously – goals, audience and content – because these are easily overlooked and misaligned by organizations.

Utility companies inevitably face multiple and competing goals. Thus, it’s critical to identify the goals specifically associated with the technology center and to distinguish them from other corporate goals or goals associated with implementing a new technology. Taking this step forces the organization to define which goals can be met by the technology center with the greatest efficiency, and establishes a clear plan that can be used as a guide in resolving the inevitable future conflicts.

Similarly, the stakeholders served by the utility represent distinct audiences. Based on the goals of the center and the organization, as well as the internal expectations set by managers, the target audience needs to be well defined. Many important facets of the technology center, such as content and location, will be partly determined by the target audience. Finally, the right content is critical to success. A regulator may want to see different information than customers do.

In addition, the audience’s specific needs dictate different content options. Do the utility’s customers care about the environment? Do they care more about advances in technology? Are they concerned about how their lives will change in the future? These questions need to be answered early in the process.

The key to successfully completing Phase 1 is constant engagement with the utility’s decision makers, since their expectations for the technology center will vary greatly depending on their responsibilities. Throughout this phase, the technology center’s planners need to meet with these decision makers on a regular basis, gather and respect their opinions, and come to the optimal mix for the utility on the whole. This can be done through interviews or a series of workshops, whichever is better suited for the utility. We have found that by employing this process, an organization can develop a framework of goals, audience and content mix that everyone will agree on – despite differing expectations.

Phase 2: Design Characteristics

The second phase of the development framework focuses on the high-level physical layout of the technology center. These “design characteristics” will affect the overall layout and presentation of the technology center.

We have identified six key characteristics that need to be determined. Each is developed as a trade-off between two extremes; this helps utilities understand the issues involved and debate the solutions. Again, there are no right answers to these issues – the optimal solution depends on the utility’s environment and expectations:

  • Small versus large. The technology center can be small, like a cell phone store, or large, like a Best Buy.
  • Guided versus self-guided. The center can be designed to allow visitors to guide themselves, or staff can be retained to guide visitors through the facility.
  • Single versus multiple. There may be a single site, or multiple sites. As with the first issue (small versus large), one site may be a large flagship facility, while the others represent smaller satellite sites.
  • Independent versus linked. Depending on the nature of the exhibits, technology center sites may operate independently of each other or include exhibits that are remotely linked in order to display certain advanced technologies.
  • Fixed versus mobile. The technology center can be in a fixed physical location, but it can also be mounted on a truck bed to bring the center to audiences around the region.
  • Static versus dynamic. The exhibits in the technology center may become outdated. How easy will it be to change or swap them out?

Figure 2 illustrates a sample set of design characteristics for one technology center, using a sample design characteristic map. This map shows each of the characteristics laid out around the hexagon, with the preference ranges represented at each vertex. By mapping out the utility’s options with regard to the design characteristics, it’s possible to visualize the trade-offs inherent in these decisions, and thus identify the optimal design for a given environment. In addition, this type of map facilitates reporting on the project to higher-level executives, who may benefit from a visual executive summary of the technology center’s plan.
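
For readers who want to reproduce such a map, the sketch below draws a six-axis radar chart of the kind shown in Figure 2 using matplotlib. The axis positions are hypothetical choices, not recommendations.

```python
# Sample design characteristic map: 0 = first extreme, 1 = second extreme.
import math
import matplotlib.pyplot as plt

characteristics = ["small vs large", "guided vs self-guided",
                   "single vs multiple", "independent vs linked",
                   "fixed vs mobile", "static vs dynamic"]
position = [0.7, 0.3, 0.2, 0.1, 0.8, 0.9]  # hypothetical choices for one center

angles = [2 * math.pi * i / len(characteristics)
          for i in range(len(characteristics))]
angles_closed = angles + angles[:1]        # repeat first point to close the shape
position_closed = position + position[:1]

ax = plt.subplot(polar=True)
ax.plot(angles_closed, position_closed, marker="o")
ax.fill(angles_closed, position_closed, alpha=0.25)
ax.set_xticks(angles)
ax.set_xticklabels(characteristics)
ax.set_ylim(0, 1)
ax.set_title("Sample design characteristic map (hypothetical)")
plt.show()
```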

The tasks in Phase 2 require the utility’s staff to be just as engaged as in the strategy phase. A workshop or interviews with staff members who understand the various needs of the utility’s region and customer base should be conducted to work out an optimal plan.

Phase 3: Execution Variables

Phases 1 and 2 provide a strategy and design for the technology center, and allow the utility’s leadership to formulate a clear vision of the project and come to agreement on the ultimate purpose of the technology center. Phase 3 involves engaging the technology developers to identify which aspects of the new technology – for example, smart appliances, demand-side management, outage management and advanced metering – will be displayed at the technology center.

During this phase, utilities should create a complete catalog of the technologies that will be demonstrated, and match them up against the strategic content mix developed in Phase 1. A ranking is then assigned to each potential new technology based on several considerations, such as how well it matches the strategy, how feasible it is to demonstrate the given technology at the center, and what costs and resources would be required. Only the most efficient and well-matched technologies and exhibits will be displayed.
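
A simplified version of that ranking might look like the following; the criteria, weights and scores are invented, and a real assessment would be grounded in the Phase 1 strategy work.

```python
# Weighted scoring of candidate exhibits; higher "cost" score = cheaper to show.

WEIGHTS = {"strategy_fit": 0.5, "feasibility": 0.3, "cost": 0.2}

candidates = {
    "advanced metering":      {"strategy_fit": 9, "feasibility": 8, "cost": 6},
    "smart appliances":       {"strategy_fit": 7, "feasibility": 9, "cost": 8},
    "outage management":      {"strategy_fit": 8, "feasibility": 5, "cost": 4},
    "demand-side management": {"strategy_fit": 9, "feasibility": 6, "cost": 7},
}

def score(tech_scores):
    return sum(WEIGHTS[criterion] * value
               for criterion, value in tech_scores.items())

ranked = sorted(candidates.items(), key=lambda item: score(item[1]), reverse=True)
for name, tech_scores in ranked:
    print(f"{score(tech_scores):.1f}  {name}")
```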

During Phase 3, outside vendors are also engaged, including architects, designers, mobile operators (if necessary) and real estate agents, among others. With the first two phases providing a guide, the utility can now open discussions with these vendors and present a clear picture of what it wants. The technical requirements for each exhibit will be cataloged and recorded to ensure that any design will take all requirements into account. Finally, the budget and work plan are written and finalized.

CONCLUSION

With the planning framework completed, the team can now build the center. The framework serves as the blueprint for the center, and all relevant benchmarks must be transparent and open for everyone to see. Disagreements during the buildout phase can be referred back to the framework, and issues that don’t fit the framework are discarded. In this way, the utility can ensure that the technology center will meet its goals and serve as a valuable tool in the process of transformation.

Thank you to Ian Simpson, IBM Global Business Services, for his contributions to this paper.

ENDNOTE

  1. Critical peak pricing refers to the model whereby utilities use peak pricing only on days when demand for electricity is at its peak, such as extremely hot days in the summer.

The Virtual Generator

Electric utility companies today constantly struggle to find a balance between generating sufficient power to satisfy their customers’ dynamic load requirements and minimizing their capital and operating costs. They spend a great deal of time and effort attempting to optimize every element of their generation, transmission and distribution systems to achieve both their physical and economic goals.

In many cases, “real” generators waste valuable resources – waste that if not managed efficiently can go directly to the bottom line. Energy companies therefore find the concept of a “virtual generator,” or a virtual source of energy that can be turned on when needed, very attractive. Although virtual generators generally represent only a small percentage of utilities’ overall generation capacity, they are quick to deploy, affordable and cost-effective, and they represent a form of “green energy” that can help utilities meet carbon emission standards.

Virtual generators use forms of dynamic voltage and reactive power (Volt/VAr) adjustment that are controlled through sensing, analytics and automation. The overall process involves first flattening, or tightening, the voltage profiles by adding voltage regulators to the distribution system. Then, by moving the voltage profile up or down within the operational voltage bounds, utilities can achieve significant benefits (Figure 1). It’s important to understand, however, that because voltage adjustments will influence VArs, utilities must also adjust both the placement and control of capacitors (Figure 2).

Various business drivers will influence the use of Volt/VAr. A utility could, for example, use Volt/VAr to:

  • Respond to an external system-wide request for emergency load reduction;
  • Assist in reducing a utility’s internal load – both regional and throughout the entire system;
  • Target specific feeder load reduction through the distribution system;
  • Respond as a peak load relief (a virtual peaker);
  • Optimize Volt/VAr for better reliability and more resiliency;
  • Maximize the efficiency of the system and subsequently reduce energy generation or purchasing needs;
  • Achieve economic benefits, such as generating revenue by selling power on the spot market; and
  • Supply VArs to supplement off-network deficiencies.

Each of the above potential benefits falls into one of four domains: peaking relief, energy conservation, VAr management or reliability enhancement. The peaking relief and energy conservation domains deal with load reduction; VAr management, logically enough, involves management of VArs; and reliability enhancement actually increases load. In this latter domain, the utility will use increased voltage either to enable greater voltage tolerances in self-healing grid scenarios or to improve the performance of non-constant power devices so they can be removed from the system as soon as possible, thereby improving diversity.

Volt/VAr optimization can be applied to all of these scenarios. It is intended either to optimize the power factor of a utility’s distribution network toward unity or to purposefully make the power factor leading in anticipation of a change in load characteristics.

Each of these potential benefits comes from solving a different business problem. Because of this, at times they can even be at odds with each other. Utilities must therefore create fairly complex business rules supported by automation to resolve any conflicts that arise.

Although the concept of load reduction using Volt/VAr techniques is not new, the ability to automate the capabilities in real time and drive the solutions with various business requirements is a relatively recent phenomenon. Energy produced with a virtual generator is neither free nor unlimited. However, it is real in the sense that it allows the system to use energy more efficiently.

A number of things are driving utilities’ current interest in virtual generators, including the fact that sensors, analytics, simulation, geospatial information, business process logic and other forms of information technology are increasingly affordable and robust. In addition, lower-cost intelligent electronic devices (IEDs) make virtual generators possible and bring them within reach of most electric utility companies.

The ability to innovate an entirely new solution to support the above business scenarios is now within the realm of possibility for the electric utility company. As an added benefit, much of the base IT infrastructure required for virtual generators is the same as that required for other forms of “smart grid” solutions, such as advanced metering infrastructure (AMI), demand side management (DSM), distributed generation (DG) and enhanced fault management. Utilities that implement a well-designed virtual generator solution will ultimately be able to align it with these other power management solutions, thus optimizing all customer offerings that will help reduce load.

HOW THE SOLUTION WORKS

All utilities are required, for regulatory or reliability reasons, to stay within certain high- and low-voltage parameters for all of their customers. In the United States, the American National Standards Institute (ANSI) C84.1 standard specifies that the nominal voltage for a residential single-phase service should be 120 volts, with a plus or minus 6-volt variance (that is, 114 to 126 volts). Other countries around the world have similar guidelines. Whatever the actual values are, all utilities are required to operate within these high- and low-voltage “envelopes.” In some cases, additional requirements may be imposed as to the amount of variance – the number of volts changed or the percent change in the voltage – that can take place over a period of minutes or hours.

Commercial customers may have different high/low values, but the principle remains the same. In fact, it is the mixture of residential, commercial and industrial customers on the same feeder that makes the virtual generation solution almost a requirement if a utility wants to optimize its voltage regulation.

Although it would be ideal for a utility to deliver 120-volt power consistently to all customers, the physical properties of the distribution system as well as dynamic customer loading factors make this difficult. Most utilities are already trying to accomplish this through planning, network and equipment adjustments, and in many cases use of automated voltage control devices. Despite these efforts, however, in most networks utilities are required to run the feeder circuit at higher-than-nominal levels at the head of the circuit in order to provide sufficient voltage for downstream users, especially those at the tails or end points of the circuit.

In a few cases, electric utilities have added manual or automatic voltage regulators to step up voltage at one or more points in a feeder circuit because of nonuniform loading and/or varied circuit impedance characteristics throughout the circuit profile. This stepped-up slope, or curve, allows the utility company to comply with the voltage level requirements for all customers on the circuit. In addition, utilities can satisfy the VAr requirements for operational efficiency of inductive loads using switched capacitor banks, but they must coordinate those capacitor banks with voltage adjustments as well as power demand. Refining voltage profiles through virtual generation usually implies a tight corresponding control of capacitance as well.

The theory behind a robust Volt/VAr regulated feeder circuit is based on the same principles but applied in an innovative manner. Rather than just using voltage regulators to keep the voltage profile within the regulatory envelope, utilities try to “flatten” the voltage curve or slope. In reality, the overall effect is a stepped/slope profile due to economic limitations on the number of voltage regulators applied per circuit. This flattening has the effect of allowing an overall reduction in nominal voltage. In turn, the operator may choose to move the voltage curve up or down within the regulatory voltage envelope. Utilities can derive extra benefit from this solution because all customers within a given section of a feeder circuit could be provided with the same voltage level, which should result in fewer “problem” customers who may not be in the ideal place on the circuit. It could also minimize the power wasted by overdriving the voltage at the head of the feeder in order to satisfy customers at the tails.
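
The sketch below illustrates the idea with a toy feeder model rather than a real power flow study: a uniformly loaded feeder sags along its length, a mid-feeder regulator flattens the profile, and the flatter profile lets the operator run a lower source voltage while every customer stays inside the 114-to-126-volt envelope. All numbers are invented.

```python
# Toy feeder: uniform voltage drop per segment, optional mid-feeder regulator.

LOW, HIGH = 114.0, 126.0        # regulatory envelope, volts
SEGMENTS = 10
DROP_PER_SEGMENT = 0.9          # volts lost per segment under load (assumed)

def profile(source_volts, regulator_at=None, boost=0.0):
    """Voltage at the end of each segment, with an optional booster."""
    volts, out = source_volts, []
    for seg in range(SEGMENTS):
        if seg == regulator_at:
            volts += boost      # regulator steps the voltage back up
        volts -= DROP_PER_SEGMENT
        out.append(volts)
    return out

def compliant(volts_along_feeder):
    return all(LOW <= v <= HIGH for v in volts_along_feeder)

# Without a regulator, the head must run high to keep the tail above 114 V.
print(compliant(profile(124.0)))                             # True, head runs hot
# A mid-feeder regulator flattens the profile, so a lower source still complies.
print(compliant(profile(120.0, regulator_at=5, boost=4.5)))  # True, flatter and lower
```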

THE ROLE OF AUTOMATION IN DELIVERING THE VIRTUAL GENERATOR

Although theoretically simple in concept, executing and maintaining a virtual generator solution is a complex task that requires real-time coordination of many assets and business rules. Electrical distribution networks are dynamic systems with constantly changing demands, parameters and influencers. Without automation, utilities would find it impossible to deliver and support virtual generators, because it’s infeasible to expect a human – or even a number of humans – to operate such systems affordably and reliably. Therefore, utilities must leverage automation to put humans in monitoring rather than controlling roles.

There are many “inputs” to an automated solution that supports a virtual generator. These include both dynamic and static information sources. For example, real-time sensor data monitoring the condition of the networks must be merged with geospatial information, weather data, spot energy pricing and historical data in a moment-by-moment, repeating cycle to optimize the business benefits of the virtual generator. Complicating this, in many cases the team managing the virtual generator will not “own” all of the inputs required to feed the automated system. Frequently, they must share this data with other applications and organizational stakeholders. It’s therefore critical that utilities put into place an open, collaborative and integrated technology infrastructure that supports multiple applications from different parts of the business.

One of the most critical aspects of automating a virtual generator is having the right analytical capabilities to decide where and how the virtual generator solution should be applied to support the organization’s overall business objectives. For example, utilities should use load predictors and state estimators to determine future states of the network based on load projections given the various Volt/VAr scenarios they’re considering. Additionally, they should use advanced analytics to determine the resiliency of the network or the probability of internal or external events influencing the virtual generator’s application requirements. Still other types of analyses can provide utilities with a current view of the state of the virtual generator and how much energy it’s returning to the system.
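
As a toy stand-in for such analytics, the sketch below projects the next hour’s feeder load with an exponentially weighted moving average and checks the headroom available before committing the virtual generator. Production systems use far richer load prediction and state estimation; the data here is invented.

```python
# One-step-ahead load forecast via exponentially weighted moving average.

def ewma_forecast(history_mw, alpha=0.4):
    """Higher alpha weights recent hours more heavily."""
    forecast = history_mw[0]
    for load in history_mw[1:]:
        forecast = alpha * load + (1 - alpha) * forecast
    return forecast

feeder_load_mw = [8.2, 8.6, 9.1, 9.8, 10.4, 10.9]  # last six hours (invented)
next_hour = ewma_forecast(feeder_load_mw)
print(f"Projected next-hour load: {next_hour:.1f} MW")

# A Volt/VAr dispatcher might compare the projection with the feeder rating
# before committing the virtual generator to system-wide load relief.
FEEDER_RATING_MW = 12.0
print(f"Headroom: {FEEDER_RATING_MW - next_hour:.1f} MW")
```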

While it is important that all these techniques be used in developing a comprehensive load-management strategy, they must be unified into an actionable, business-driven solution. The business solution must incorporate the values achieved by the virtual generator solutions, their availability, and the ability to coordinate all of them at all times. A voltage management solution that is already being used to support customer load requirements throughout the peak day will be of little use to the utility for load management. It becomes imperative that the utility understand the effect of all the voltage management solutions when they are needed to support the energy demands on the system.

Tomorrow’s Bill Payment Solutions for Today’s Businesses

Providing consumers with innovative services for more than 150 years, Western Union is an established leader in electronic and cash bill-payment solutions. We introduced our first consumer-to-consumer money transfer service in 1871 and began offering consumer-to-business bill payment services in 1989 with the introduction of the Western Union Quick Collect® service, providing consumers in the United States with convenient walk-in agent network locations where they can pay bills in cash.

By 2008, our comprehensive suite of services had grown to include Speedpay® – an electronic bill payment option that provides businesses with Internet, IVR, desktop, mobile payments, online banking and call center solutions, as well as e-bill presentment with payments and interactive outbound messaging integrated with payment processing.

THE CONSUMER-TO-BUSINESS SEGMENT

Western Union’s electronic and cash bill payment services provide consumers with fast, convenient ways to send one-time or recurring payments to a broad spectrum of industries. At Western Union we have relationships with more than 6,000 businesses and organizations that receive consumer payments, including utilities, auto finance companies, mortgage servicers, financial service providers and government agencies. These relationships form a core component of our consumer-to-business payment service and are one reason we were able to process 404 million consumer-to-business transactions in 2007.

PORTFOLIO OF SERVICES

Our consumer-to-business services give consumers choices in payment type and method, and include the following options:

  • Electronic payments. Consumers and billers use our Speedpay® service in the United States and the United Kingdom to make consumer payments to a variety of billers using credit cards, ATM cards and debit cards, and via ACH withdrawal. Payments are initiated through multiple channels, including biller-hosted websites, westernunion.com, IVR units, Online Banking websites and call centers.
  • Cash payments. Consumers use our Quick Collect® or Prepaid® services to send guaranteed funds to businesses and government agencies using cash (and in select locations, debit cards). Quick Collect is available at nearly 60,000 Western Union agent locations across the United States and Canada, while our Prepaid service can be accessed at more than 40,000 U.S. locations. Consumers can also use our Convenience Pay® service to send payments by cash or check from a smaller number of agent locations primarily to utilities and telecommunication providers.

DISTRIBUTION AND MARKETING CHANNELS

Our electronic payment services are available primarily through an IVR, over the Internet and via a call center desktop application used while speaking with a biller’s customer service representative. Through our Quick Pay® service, it is possible to receive payments sent from outside the United States or Canada from over 320,000 agent locations in more than 200 countries and territories around the world. We work in partnership with our billers to market our services to consumers in a number of ways, including direct mail, email, Internet and point-of-sale advertising.

ONLINE BANKING

In late 2007, Western Union launched its Online Banking initiative, helping to change the way consumers pay their bills. The channel shortens the time it takes billers to receive payment from two to four days to same-day or next-day delivery, and enables Western Union Payment Services to process bill payments initiated by consumers from their banks’ online banking sites.

Western Union plans to work with the nation’s largest banks to provide your customers with a new class of online banking payment that allows them to make same- and next-day payments that are posted and funded to you faster and are of a higher quality than other online banking payments currently available.

EMAIL BILL PRESENTMENT AND PAYMENT

While the benefits of electronic bill presentment and payment are compelling for both billers and consumers, low consumer adoption rates have prevented billers from fully realizing the cost savings and improved customer service levels these services promote. Western Union® Payment Services aims to change this through its integration with Striata® Email bill presentment and payment (EBPP) solutions.

With this integrated, encrypted Email bill presentment and one-click payment service, consumers no longer need to register to receive their bill electronically, visit a separate website to download the bill and send a payment, or remember multiple user names and passwords. By removing these extra steps from the process, these services become dramatically easier to use for consumers.

The critical differentiator of the Western Union/Striata service is that the entire e-bill is delivered directly into the consumer’s in-box as an encrypted off-line attachment, enabling payment to be sent through the e-bill itself using the Western Union® Speedpay service. While complementary to existing online presentment solutions, this “push” Email billing offering can be more successful at driving adoption.

Bill Pay and Presentment Solutions for Utility Companies

Recognizing that not all customers view and pay bills in the same way, CheckFree helps you deliver a complete range of billing and payment options – from the traditional methods of receiving and paying bills by mail, in person and over the phone to complete paperless online billing and payment through either a bank’s website or your own. CheckFree offers solutions that help you meet market demands.

Whether you need to improve a single solution or your entire offering, CheckFree can offer experience and expertise in the following payment channels:

  • By Mail. Some people still choose to receive paper bills and write checks. CheckFree can help turn these paper checks into ACH electronic debits, speeding payment collections.
  • In Person. Give your customers in-person payment convenience and choice to use cash, checks, money orders or merchant-issued certificates.
  • By Phone. Enable your customers to pay a bill anywhere they have access to a phone, all day, every day. With the recent acquisition of CheckFree by Fiserv, you can look for Fiserv’s industry-leading BillMatrix platform to be integrated into our suite of offerings.
  • Online. Deliver bill paying ease and convenience through CheckFree’s full range of electronic billing and payment (EBP) solutions at your site and beyond your site.
  • Emergency Payments. Offer a fee-based option for last-minute online payments and eliminate expenses due to delinquent payments.
  • Electronic Remittance. Provide quicker access to payment funds while reducing the cost of processing paper checks.

CUSTOMER INTERACTION OPTIMIZATION

CheckFree solutions enable you to optimize each customer interaction by offering multiple payment channel options that focus on security, reliability, functionality and convenience. Each interaction with the consumer represents an ideal opportunity to enhance the customer experience and build loyal customers.

Our Customer Interaction Optimization solutions make interactions a win/win for both you and your customers. You deliver the payment channels they seek while maintaining the ability to guide them to the most profitable channel for your organization. The ultimate business objective is to steer customers to the lower cost-to-serve billing and payment option: the online channel.

CheckFree understands your company’s strategic need to direct consumers to the optimal online channel to enhance revenue growth through reductions in operating costs. Having invested substantially in research on consumer behavior, segmentation and marketing, CheckFree can assist with creating marketing campaigns focused on promoting your online channel. Every bill received, payment made or visit to your website can be utilized to strategically drive adoption of online bill pay, e-bills and paper shut-off.

For more than 25 years, CheckFree has been a leading provider of electronic billing and payment services. We process more than one billion electronic payments each year. With CheckFree’s Customer Interaction Optimization solutions, you can enhance your payment offerings while improving your bottom line.

About Alcatel-Lucent

Alcatel-Lucent’s vision is to enrich people’s lives by transforming the way the world communicates. Alcatel-Lucent provides solutions that enable service providers, enterprises and governments worldwide to deliver voice, data and video communication services to end users. As a leader in carrier and enterprise IP technologies; fixed, mobile and converged broadband access; applications and services, Alcatel-Lucent offers the end-to-end solutions that enable compelling communications services for people at work, at home and on the move.

With 77,000 employees and operations in more than 130 countries, Alcatel-Lucent is a local partner with global reach. The company has the most experienced global services team in the industry and includes Bell Labs, one of the largest research, technology and innovation organizations focused on communications. Alcatel-Lucent achieved adjusted revenues of €17.8 billion in 2007, and is incorporated in France, with executive offices located in Paris.

YOUR ENERGY AND UTILITY PARTNER

Alcatel-Lucent offers comprehensive capabilities that combine carrier-grade communications technology and expertise with utility industry-specific knowledge. Alcatel-Lucent’s IP transformation expertise and utility market-specific knowledge have led to the development of turnkey communications solutions designed for the energy and utility market. Alcatel-Lucent has extensive experience in:

  • Transforming and renewing network technologies;
  • Designing and implementing SmartGrid initiatives;
  • Meeting NERC CIP compliance and security requirements;
  • Working in live power generation, transmission and distribution environments;
  • Implementing and managing complex mission-critical communications projects;
  • Developing best-in-class partnerships with organizations like CURRENT Communications, Ambient, BelAir Networks, Alvarion and others in the utility industry.

Working with Alcatel-Lucent enables energy and utility companies to realize the increased reliability and greater efficiency of next-generation communications technology, providing a platform for – and minimizing the risks associated with – moving to SmartGrid solutions. And Alcatel-Lucent helps energy and utility companies achieve compliance with regulatory requirements and reduce operational expenses while maintaining the security, integrity and high availability of their power infrastructure and services.

ALCATEL-LUCENT IP MPLS SOLUTION FOR THE NEXT-GENERATION UTILITY NETWORK

Utility companies are experienced at building and operating reliable and effective networks to ensure the delivery of essential information and maintain flawless service delivery. The Alcatel-Lucent IP/MPLS solution can enable utility operators to extend and enhance their networks with new technologies like IP, Ethernet and MPLS. These new technologies will enable the utility to optimize its network to reduce both capital expenditures and operating expenses without jeopardizing reliability. Advanced technologies also allow the introduction of new applications that can improve operational and workflow efficiency within the utility. Alcatel-Lucent leverages cutting-edge technologies along with the company’s broad and deep experience in the utility industry to help utility operators build better, next-generation networks with IP/MPLS.

THE ALCATEL-LUCENT ADVANTAGE

Alcatel-Lucent has years of experience in the development of IP, MPLS and Ethernet technologies. The Alcatel-Lucent IP/MPLS solution offers utility operators the flexibility, scale and feature sets required for mission-critical operation. With the broadest portfolio of products and services in the telecommunications industry, Alcatel-Lucent has the unparalleled ability to design and deliver end-to-end solutions that drive next-generation communications networks.

Delivering the Tools for Creating the Next-Generation Electrical SmartGrid

PowerSense delivers cutting-edge monitoring and control equipment together with integrated supervision to enable the modern electrical utility to prepare its existing power infrastructure for tomorrow’s SmartGrid.

PowerSense uses world-leading technology to merge existing and new power infrastructures into the existing SCADA and IT systems of the electrical utilities. This integration of the upgraded power infrastructure and existing IT systems instantly optimizes outage and fault management, thereby decreasing customer minutes lost as measured by the System Average Interruption Duration Index (SAIDI).
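
For reference, SAIDI is conventionally computed (per IEEE Standard 1366; the article itself does not spell the formula out) as the total duration of customer interruptions divided by the total number of customers served:

\[ \mathrm{SAIDI} = \frac{\sum_i r_i N_i}{N_T} \]

where \(r_i\) is the restoration time of interruption \(i\), \(N_i\) the number of customers affected by it, and \(N_T\) the total number of customers served. Lowering SAIDI therefore means either interrupting fewer customers or restoring them faster.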

At the same time, this integration helps the electrical utility further improve asset management (resulting in major cost savings) and power management (resulting in high-performance outage management and high power efficiency). The PowerSense product line is called DISCOS® (Integrated Supervision and Control System for distribution networks).

DISCOS®

The following outlines the business and system values offered by the DISCOS® product line.

Business Values

  • Cutting-edge optical technology (the sensor)
  • Easily and safely retrofitted (sensors can be fitted into all transformer types)
  • End-to-end solutions (from sensors to laptop)
  • Installation in steps (implementation based on cost-benefit analysis)

System Values

  • Current (for each phase)
  • Voltage (for each phase)
  • Frequency
  • Power (active, reactive and direction)
  • Distance-to-fault measurement
  • Control of breakers and service relays
  • Analog inputs
  • Measurement of harmonic content for current (I) and voltage (V)
  • Measurement of earth fault

These parameters are available for both medium- and low-voltage power lines.
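
To make the parameter list above concrete, here is one way a per-transformer reading covering these values might be represented in software. This is a minimal sketch; the structure, field names and units are illustrative assumptions, not PowerSense’s actual data model or API.

```python
# Illustrative record for DISCOS-style measurements; names and units are assumptions.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PhaseMeasurement:
    current_a: float                       # per-phase current (amperes)
    voltage_v: float                       # per-phase voltage (volts)
    harmonics_i: List[float] = field(default_factory=list)  # harmonic content, current
    harmonics_v: List[float] = field(default_factory=list)  # harmonic content, voltage

@dataclass
class TransformerReading:
    phases: List[PhaseMeasurement]         # one entry per phase
    frequency_hz: float
    active_power_kw: float
    reactive_power_kvar: float
    power_direction: int                   # +1 = export, -1 = import
    distance_to_fault_km: Optional[float] = None
    earth_fault_detected: bool = False
    analog_inputs: List[float] = field(default_factory=list)
    breaker_closed: Optional[bool] = None  # state reported alongside breaker/relay control
```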

OPTICAL SENSOR TECHNOLOGY

With its stability and linearity, PowerSense’s cutting-edge sensor technology is setting new standards for current measurement in general. For PowerSense’s primary business area of medium-voltage (MV) grid monitoring in particular, it is creating a completely new set of standards for how to monitor the MV power grid.

The DISCOS® Current Sensor is part of the DISCOS® Opti module. The DISCOS® Sensor monitors current magnitude and phase angle on both the low-voltage (LV) and MV sides of the transformer.

BASED ON THE FARADAY EFFECT

Today, only a few applications in measuring instruments are based on the Faraday rotation principle. For instance, the Faraday effect has been used for measuring optical rotary power, for amplitude modulation of light and for remote sensing of magnetic fields.

Now, due to advanced computing techniques, PowerSense is able to offer a low-priced optical sensor based on the Faraday effect.
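
The physics can be stated compactly; the relations below are standard electromagnetism, included for orientation rather than as PowerSense specifications. The Faraday effect rotates the polarization plane of light traveling a distance \(d\) through a medium in a magnetic field:

\[ \theta = V\,B\,d, \qquad B = \frac{\mu_0 I}{2\pi r}, \]

where \(V\) is the Verdet constant of the sensing material and \(B\) the flux density produced at distance \(r\) by a conductor carrying current \(I\) (Ampère’s law). For a fixed geometry, \(\theta \propto I\), so measuring the rotation angle yields the current without any galvanic contact with the conductor.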

THE COMPANY

PowerSense A/S was established on September 1, 2006, by DONG Energy A/S (formerly Nesa A/S) as a spin-off of the DISCOS® product line business. The purpose of the spin-off was to ensure the best future business conditions for the DISCOS® product line.

Following the spin-off, BankInvest A/S, a Danish investment bank, holds 70 percent of the share capital, while DONG Energy A/S retains the remaining 30 percent.

Customer Service in the Brave New World of Today’s Utilities

A NEW GENERATION OF CUSTOMER

Today’s utility customers are energy dependent, information driven, technologically advanced, willing to change and environmentally conscious. Their grandparents prompted utilities to develop and offer levelized billing, and their parents created the need for online bill presentment and credit card payment. This new generation of customer is about to usher in a brave new world of utility customer service in which the real-time utility will conduct business 24 hours a day, seven days a week, 365 days a year, and Internet-savvy consumers will have all the capabilities of the current customer service representative. They’ll be able to receive pricing signals and control their utility usage via Internet portals, as well as shop among utilities for the best price and switch providers.

Expectations of system reliability are high today. Ten years ago, when customers called to let you know their power was out, the call took 20 seconds; today, they expect you to already know that their power is out and to be able to provide additional information about the nature and duration of the outage. What’s wrong? Are crews on the way? What’s the ETR (estimated time of restoration)? Can you text me when it’s back on? The call that includes these questions (and more) takes three times as long as that phone call 10 years ago. Thankfully, utility technology is coming of age just in time to meet the needs of evolving utility customers.

Many utilities already use automated circuit switchers to monitor lines for potential fault conditions and to react in real time to isolate faults and restore power. Automated metering systems send out “last gasp” outage notifications to outage management systems to predict the location of a problem for quicker restoration of service. Two-way communications systems send signals to smart appliances, system monitoring devices and customer messaging orbs to affect customer usage patterns. Fiber-to-the-home (FTTH) and wireless systems communicate meter usage in near real time to enable monitoring for abnormal consumption patterns. If customers have all of this data at their fingertips, what more will they expect from their utility service professionals? Advanced metering infrastructure (AMI) and two-way communications between customer and utility provider are essential to the future of these innovations. Figure 1 indicates the penetration of advanced metering by region.

A TOUCH OF ORWELL

This brave new world is not without risk. Tremendous amounts of data will be acquired and maintained. Monthly usage habits of consumers can provide incredible insight into customers’ lives – imagine the knowledge that real-time data can provide. As marketers begin to understand the powerful communications channels utilities possess, partnerships will emerge to maximize their value. Privacy laws and regulations defining the proper use and misuse of data, similar to the Customer Proprietary Network Information (CPNI) rules in telecommunications, will emerge just as they did in that industry. Thus, it would be wise for the utility industry to take steps to limit data use before legislative mandates are enacted that would create barriers to practical use.

EMERGING BUSINESSES CREATING VALUE FOR CUSTOMERS

Many of the technologies discussed in this paper already exist; the future will simply make their application more common – the interesting part will come in seeing how these products and services are bundled and who will provide them. Over the next 10 years, many new services (and a few new spins on old ones) will be offered to the consumer via this new infrastructure. The array of service offerings will be as broad as the capabilities created through the utility’s infrastructure design. Utilities offering only one-way communication from the meter will be limited, while utilities with two-way communication riding their own fiber-optic systems will find a vast number of opportunities. Some of these services will fall within the core competency of the utility and be a natural fit in creating new revenue streams; others will require new partnerships to enable their existence. Some will span residential, commercial and industrial market segments, while others will be tailored to the residential customer only.

Energy management and consulting services will flourish during the initial period, especially in areas where time-of-use rates are incorporated in all market segments. Cable, Internet, telephone and security services will consolidate in areas where fiber-to-the-home is part of the infrastructure. Utilities’ ability to provide these services may be greatly affected by their legal and regulatory structures. Where limitations are imposed on the scope and type of services, partnerships will be formed to enable cost-effective service. Figure 2 shows what utilities reported to be the most common AMI system usages in a recent Federal Energy Regulatory Commission (FERC) survey.

As shown in Figure 2, load control, demand response monitoring and notification of price changes are already part of the system capabilities. As awareness of energy efficiency develops, a new focus on conservation will give rise to a newfound interest in smart appliances. Their operational characteristics will be more sophisticated than their “cycle and save”-era predecessors, and they will meet customers’ demand for energy savings and environmental friendliness. This will not be limited to water heaters and heating, ventilation and air-conditioning (HVAC) units. The new initiatives will encompass refrigerators, freezers, washers, dryers and other second-order appliances, driving conservation derived from time-of-day use to a new level. And these initiatives will not be limited to electricity.

IMPACTS OF TECHNOLOGICAL CHANGE ON OTHER UTILITIES

Very few utility services will be exempt from the impact of changes in the electric industry. Natural gas and water usage, too, will be affected as the nation focuses its attention on the efficient use of resources. Natural gas time-of-use rates will emerge, along with interruptible rates for residential consumers. This may take 10 to 15 years to occur, and a declining usage trend will need to be reversed; however, the same infrastructure constraints and concerns that plague the electric industry will be recognized in the natural gas industry as well. Thus, we can expect energy providers to adopt these rates in the future to stay competitive. If electric systems are able to shift peak usage and levelize loads, the need for natural gas-fired generation will diminish. Natural gas-fired plants for system peaking would become unnecessary, and the decrease in demand would assist in stabilizing natural gas pricing.

Water availability issues are no longer limited to the Western United States, with areas such as Atlanta now beginning to experience water shortages as well. As a result, reverse-step rates that encourage water usage are being replaced with fixed and progressive step-rate structures to encourage water conservation. Automated metering can assist in eliminating waste, identifying excessive use during curtailment periods and creating a more efficient water distribution system. As energy time-of-use rates are implemented, water and wastewater treatment plants may find efficiencies in offering time-of-use rates as well in order to shape the usage characteristics of their customers without adding increased facilities. Even if this does not occur, time-of-use shifting of electrical load will have an impact on water usage patterns and effectively change water and wastewater operational characteristics.

In a world of increasing environmental vulnerability, the ability to monitor backflow in water metering will be essential in our efforts to be environmentally safe and monitor domestic threats to the water supply. Although technology’s ability to identify such threats will not prevent their occurrence, it will help utilities evaluate events and respond in order to isolate and diminish possible future threats.

IMPLICATIONS FOR UTILITIES

The above-described technological innovations don’t come without an impact on the service side of utilities. It will be difficult at best for utilities to modify legacy systems to take advantage of the benefits found in new technologies. More robust computer systems implemented in preparation for Y2K will be capable of some modifications; however, new software offerings are being designed today to address the vast opportunities that will soon exist. Processes for data management, storage, retrieval and use will need to be developed. And a new breed of customer service representative will begin to evolve. New technologies, near real-time information available to the consumer, unique customer and appliance configurations, and partnerships and services that go beyond the core competencies of the current workforce will create a short-term gap in trained customer service professionals. Billing departments will expand as rates become more complex. And the increased flexibility of customer information systems will require extensive checks and verifications to ensure accuracy.

Figure 3 (created by Robert Pratt of Pacific Northwest National Laboratory) provides a picture of the new landscape being created by the technologies utilities are implementing and the implications they have for customers.

Utilities with completely integrated systems will be the biggest winners in the future. Network management; geographic information systems; customer information systems; work order systems; supervisory control and data acquisition (SCADA) systems; and financial systems that communicate openly will be positioned to recognize the early wins that will spark the next decade of innovation. Cost-to-serve models continue to resonate as a popular topic among utility providers, and the impact of new technology will assist in making this integral to financial success.

The processes underlying current policies and procedures were designed for the way utilities traditionally operated – which is precisely why today’s utilities must take a systematic approach to re-evaluating their business processes if they’re to take advantage of new technology. They’ll even need to consider the cost of providing a detailed bill and mail delivery. The existence of real-time readings may bring dramatic changes in payment processing. Prepay accounts may eliminate the need to require deposits or assume risk for uncollectible accounts. Daily, weekly and semi-monthly payments may bring added cost (as may allowing customers to choose their due dates in the traditional arrears billing model); thus, utilities must consider the implications of these actions on cash flow and risk before implementing them. Advance notice of service interruption due to planned maintenance or construction can be communicated electronically over two-way automated meter reading (AMR) systems to orbs, communication panels, computers or other means. These same capabilities will dramatically change credit and collections efforts over the next 10 years. Electronic notification of past due accounts, shut-off and reconnection can all be done remotely at little cost to the utility.

IMPLICATIONS FOR CONSUMERS

Customers and commercial marketing efforts will be the driving forces for much of the innovation we’ll witness in coming years. No longer are customers simply comparing utilities against each other; today, they’re comparing utility customer service with their best and worst customer experiences regardless of industry. This means that customers are comparing a utility’s website capabilities with Amazon.com and its service response with the Ritz-Carlton, Holiday Inn or Marriott they might frequent. Service reliability is measured against FedEx. Customer service expectations are raised with every initiative of competitive enterprise – a fact utilities will have to come to terms with if they’re to succeed.

All customers are not created equal. Technologically advanced customers will find the future exciting, while customers who view their utility as just another service provider will find it complicated and at times overwhelming. Utilities must communicate with customers at all levels to adequately prepare them for a future that’s already arrived.

Achieving Decentralized Coordination in the Electric Power Industry

For the past century, the dominant business and regulatory paradigms in the electric power industry have been centralized economic and physical control. The ideas presented here and in my forthcoming book, Deregulation, Innovation, and Market Liberalization: Electricity Restructuring in a Constantly Evolving Environment (Routledge, 2008), comprise a different paradigm – decentralized economic and physical coordination – which will be achieved through contracts, transactions, price signals and integrated intertemporal wholesale and retail markets. Digital communication technologies – which are becoming ever more pervasive and affordable – are what make this decentralized coordination possible. In contrast to the “distributed control” concept often invoked by power systems engineers (in which distributed technology is used to enhance centralized control of a system), “decentralized coordination” represents a paradigm in which distributed agents themselves control part of the system, and in aggregate, their actions produce order: emergent order. [1]

Dynamic retail pricing, retail product differentiation and complementary end-use technologies provide the foundation for achieving decentralized coordination in the electric power industry. They bring timely information to consumers and enable them to participate in retail market processes; they also enable retailers to discover and satisfy the heterogeneous preferences of consumers, all of whom have private knowledge that’s unavailable to firms and regulators in the absence of such market processes. Institutions that facilitate this discovery through dynamic pricing and technology are crucial for achieving decentralized coordination. Thus, retail restructuring that allows dynamic pricing and product differentiation, doesn’t stifle the adoption of digital technology and reduces retail entry barriers is necessary if this value-creating decentralized coordination is to happen.

This paper presents a case study – the “GridWise Olympic Peninsula Testbed Demonstration Project” – that illustrates how digital end-use technology and dynamic pricing combine to provide value to residential customers while increasing network reliability and reducing required infrastructure investments through decentralized coordination. The availability (and increasing cost-effectiveness) of digital technologies enabling consumers to monitor and control their energy use and to see transparent price signals has made existing retail rate regulation obsolete. Instead, the policy recommendation that this analysis implies is that regulators should reduce entry barriers in retail markets and allow for dynamic pricing and product differentiation, which are the keys to achieving decentralized coordination.

THE KEYS: DYNAMIC PRICING, DIGITAL TECHNOLOGY

Dynamic pricing provides price signals that reflect variations in the actual costs and benefits of providing electricity at different times of the day. Some of the more sophisticated forms of dynamic pricing harness the dramatic improvements in information technology of the past 20 years to communicate these price signals to consumers. These same technological developments also give consumers a tool for managing their energy use, in either manual or automated form. Currently, with almost all U.S. consumers (even industrial and commercial ones) paying average prices, there’s little incentive for consumers to manage their consumption and shift it away from peak hours. This inelastic demand leads to more capital investment in power plants and transmission and distribution facilities than would occur if consumers could make choices based on their preferences and in the face of dynamic pricing.

Retail price regulation stifles the economic processes that lead to both static and dynamic efficiency. Keeping retail prices fixed truncates the information flow between wholesale and retail markets, and leads to inefficiency, price spikes and price volatility. Fixed retail rates for electric power service mean that the prices individual consumers pay bear little or no relation to the marginal cost of providing power in any given hour. Moreover, because retail prices don’t fluctuate, consumers are given no incentive to change their consumption as the marginal cost of producing electricity changes. This severing of incentives leads to inefficient energy consumption in the short run and also causes inappropriate investment in generation, transmission and distribution capacity in the long run. It has also stifled the implementation of technologies that enable customers to make active consumption decisions, even though communication technologies have become ubiquitous, affordable and user-friendly.

Dynamic pricing can include time-of-use (TOU) rates, which are different prices in blocks over a day (based on expected wholesale prices), or real-time pricing (RTP) in which actual market prices are transmitted to consumers, generally in increments of an hour or less. A TOU rate typically applies predetermined prices to specific time periods by day and by season. RTP differs from TOU mainly because RTP exposes consumers to unexpected variations (positive and negative) due to demand conditions, weather and other factors. In a sense, fixed retail rates and RTP are the end points of a continuum of how much price variability the consumer sees, and different types of TOU systems are points on that continuum. Thus, RTP is but one example of dynamic pricing. Both RTP and TOU provide better price signals to customers than current regulated average prices do. They also enable companies to sell, and customers to purchase, electric power service as a differentiated product.
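
To make this continuum concrete, the short sketch below prices the same hypothetical daily household load under a flat rate, a TOU schedule and hourly real-time prices. All rates and load figures are invented for illustration; they are not drawn from any program discussed in this article.

```python
# Hypothetical example: one day's load priced three ways. All numbers invented.
hourly_load_kwh = [1.0] * 7 + [1.5] * 3 + [1.2] * 7 + [2.5] * 4 + [1.2] * 3  # 24 hours

FLAT_RATE = 0.10  # $/kWh, every hour

def tou_rate(hour):
    """Predetermined block prices: evening peak, daytime shoulder, off-peak."""
    if 17 <= hour <= 20:
        return 0.20
    if 7 <= hour <= 16:
        return 0.11
    return 0.06

# Real-time prices vary hour to hour with market conditions (invented here).
rtp_rate = [0.05] * 7 + [0.09] * 3 + [0.10] * 7 + [0.30] * 4 + [0.08] * 3

flat_bill = sum(kwh * FLAT_RATE for kwh in hourly_load_kwh)
tou_bill = sum(kwh * tou_rate(h) for h, kwh in enumerate(hourly_load_kwh))
rtp_bill = sum(kwh * p for kwh, p in zip(hourly_load_kwh, rtp_rate))
print(f"flat ${flat_bill:.2f} | TOU ${tou_bill:.2f} | RTP ${rtp_bill:.2f}")
```

Under the flat rate, the evening peak costs no more per kilowatt-hour than any other hour; under TOU or RTP, shifting even part of the peak-hour load to off-peak hours visibly lowers the bill, which is precisely the incentive discussed above.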

TECHNOLOGY’S ROLE IN RETAIL CHOICE

Digital technologies are becoming increasingly available to reduce the cost of sending prices to people and their devices. The 2007 Galvin Electricity Initiative report “The Path to Perfect Power: New Technologies Advance Consumer Control” catalogs a variety of end-user technologies (from price-responsive appliances to wireless home automation systems) that can communicate electricity price signals to consumers, retain data on their consumption and be programmed to respond automatically to trigger prices that the consumer chooses based on his or her preferences. [2] Moreover, the two-way communication advanced metering infrastructure (AMI) that enables a retailer and consumer to have that data transparency is also proliferating (albeit slowly) and declining in price.

Dynamic pricing and the digital technology that enables communication of price information are symbiotic. Dynamic pricing in the absence of enabling technology is meaningless. Likewise, technology without economic signals to respond to is extremely limited in its ability to coordinate buyers and sellers in a way that optimizes network quality and resource use. [3] The combination of dynamic pricing and enabling technology changes the value proposition for the consumer from “I flip the switch, and the light comes on” to a more diverse and consumer-focused set of value-added services.

These diverse value-added services empower consumers, enabling them to control their electricity choices with more granularity and precision than in an environment where they think only about the total amount of electricity they consume. Digital metering and end-user devices also decrease transaction costs between buyers and sellers, lowering barriers to exchange and to the formation of particular markets and products.

Whether they take the form of building control systems that enable the consumer to see the amount of power used by each function performed in a building or appliances that can be programmed to behave differently based on changes in the retail price of electricity, these products and services provide customers with an opportunity to make better choices with more precision than ever before. In aggregate, these choices lead to better capacity utilization and better fuel resource utilization, and provide incentives for innovation to meet customers’ needs and capture their imaginations. In this sense, technological innovation and dynamic retail electricity pricing are at the heart of decentralized coordination in the electric power network.

EVIDENCE

Led by the Pacific Northwest National Laboratory (PNNL), the Olympic Peninsula GridWise Testbed Project served as a demonstration project to test a residential network with highly distributed intelligence and market-based dynamic pricing. [4] Washington’s Olympic Peninsula is an area of great scenic beauty, with population centers concentrated on the northern edge. The peninsula’s electricity distribution network is connected to the rest of the network through a single distribution substation. While the peninsula is experiencing economic growth and associated growth in electricity demand, the natural beauty of the area and other environmental concerns served as an impetus for area residents to explore options beyond simply building generation capacity on the peninsula or adding transmission capacity.

Thus, this project tested how the combination of enabling technologies and market-based dynamic pricing affected utilization of existing capacity, deferral of capital investment and the ability of distributed demand-side and supply-side resources to create system reliability. Two questions were of primary interest:

1) What dynamic pricing contracts do consumers find attractive, and how does enabling technology affect that choice?

2) To what extent will consumers choose to automate energy use decisions?

The project – which ran from April 2006 through March 2007 – included 130 broadband-enabled households with electric heating. Each household received a programmable communicating thermostat (PCT) with a visual user interface that allowed the consumer to program the thermostat for the home – specifically to respond to price signals, if desired. Households also received water heaters equipped with a GridFriendly appliance (GFA) controller chip developed at PNNL that enables the water heater to receive price signals and be programmed to respond automatically to those price signals. Consumers could control the sensitivity of the water heater through the PCT settings.
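
The sketch below illustrates, in simplified form, the kind of trigger-price rule such a device could apply. The control logic, names and numbers are assumptions for illustration; they are not the actual PCT or GFA firmware.

```python
# Hypothetical trigger-price response for a price-aware heating thermostat.
def heating_setpoint(price, comfort_f=70.0, trigger=0.15, max_setback_f=4.0):
    """Return a heating setpoint in deg F for the current price in $/kWh.

    At or below the consumer-chosen trigger price, hold the comfort setpoint.
    Above it, back off in proportion to how far price exceeds the trigger,
    capped at the maximum setback the consumer is willing to tolerate.
    """
    if price <= trigger:
        return comfort_f
    overshoot = (price - trigger) / trigger
    return comfort_f - min(max_setback_f, max_setback_f * overshoot)

for p in (0.05, 0.15, 0.18, 0.60):
    print(f"${p:.2f}/kWh -> {heating_setpoint(p):.1f} F")
```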

These households also participated in a market field experiment involving dynamic pricing. While they continued to purchase energy from their local utility at a fixed, discounted price, they also received a cash account with a predetermined balance, which was replenished quarterly. The energy use decisions they made would determine their overall bill, which was deducted from their cash account, and they were able to keep any difference as profit. The worst a household could do was a zero balance, so they were no worse off than if they had not participated in the experiment. At any time customers could log in to a secure website to see their current balances and determine the effectiveness of their energy use strategies.

On signing up for the project, the households received extensive information and education about the technologies available to them and the kinds of energy use strategies these technologies facilitate. They were then asked to choose a retail pricing contract from three options: a fixed price contract (with an embedded price risk premium); a TOU contract with a variable critical peak price (CPP) component that could be invoked in periods of tight capacity; or an RTP contract that would reflect a wholesale market-clearing price in five-minute intervals. The RTP was determined using a uniform price double auction in which buyers (households and commercial users) submit bids and sellers submit offers simultaneously. This project represented the first instance in which a double auction retail market design was tested in electric power.
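
In a uniform price double auction, demand bids are stacked from highest to lowest and supply offers from lowest to highest; the market clears where the two curves cross, and every trade executes at one common price. The sketch below shows one simple clearing rule with invented example data; the project’s actual market engine is not described in this article.

```python
# Minimal uniform-price double-auction clearing. Example data is invented.
def clear(bids, offers):
    """bids/offers: lists of (price $/kWh, quantity kWh). Returns (price, qty)."""
    bids = sorted(bids, key=lambda b: -b[0])     # demand: highest willingness first
    offers = sorted(offers, key=lambda o: o[0])  # supply: cheapest first
    i = j = 0
    bq = bids[0][1] if bids else 0.0
    oq = offers[0][1] if offers else 0.0
    price, traded = None, 0.0
    while i < len(bids) and j < len(offers) and bids[i][0] >= offers[j][0]:
        q = min(bq, oq)
        traded += q
        price = (bids[i][0] + offers[j][0]) / 2  # one common rule: marginal midpoint
        bq -= q
        oq -= q
        if bq == 0:
            i += 1
            bq = bids[i][1] if i < len(bids) else 0.0
        if oq == 0:
            j += 1
            oq = offers[j][1] if j < len(offers) else 0.0
    return price, traded

bids = [(0.30, 2.0), (0.12, 3.0), (0.06, 4.0)]    # buyers' demand
offers = [(0.04, 3.0), (0.10, 3.0), (0.25, 5.0)]  # sellers' supply
print(clear(bids, offers))  # -> (0.11, 5.0): 5 kWh trade at $0.11/kWh
```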

The households ranked the contracts and were then divided fairly evenly among the three types, along with a control group that received the enabling technologies and had their energy use monitored but did not participate in the dynamic pricing market experiment. All households received either their first or second choice; interestingly, more than two-thirds of the households ranked RTP as their first choice. This result counters the received wisdom that residential customers want only reliable service at low, stable prices.

According to the 2007 report on the project by D.J. Hammerstrom (and others), on average participants saved 10 percent on their electricity bills. [5] That report also includes the following findings about the project:

Result 1. For the RTP group, peak consumption decreased by 15 to 17 percent relative to what the peak would have been in the absence of the dynamic pricing – even though their overall energy consumption increased by approximately 4 percent. This flattening of the load duration curve indicates shifting some peak demand to nonpeak hours. Such shifting increases the system’s load factor, improving capacity utilization and reducing the need to invest in additional capacity, for a given level of demand. A 15 to 17 percent reduction is substantial and is similar in magnitude to the reductions seen in other dynamic pricing pilots.

After controlling for price response, weather effects and weekend days, the RTP group’s overall energy consumption was 4 percent higher than that of the fixed price group. This result, in combination with the load duration effect noted above, indicates that the overall effect of RTP dynamic pricing is to smooth consumption over time, not decrease it.

Result 2. The TOU group achieved both a large price elasticity of demand (-0.17), based on hourly data, and an overall energy reduction of approximately 20 percent relative to the fixed price group.

After controlling for price response, weather effects and weekend days, the TOU group’s overall energy consumption was 20 percent lower than that of the fixed price group. This result indicates that the TOU (with occasional critical peaks) pricing induced overall conservation – a result consistent with the results of the California SPP project. The estimated price elasticity of demand in the TOU group was -0.17, which is high relative to that observed in other projects. This elasticity suggests that the pricing coupled with the enabling end-use technology amplifies the price responsiveness of even small residential consumers.
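
For readers unfamiliar with the metric, price elasticity of demand is the percentage change in quantity consumed divided by the percentage change in price:

\[ \varepsilon = \frac{\%\Delta Q}{\%\Delta P} \]

so an elasticity of -0.17 means a 10 percent price increase is associated with roughly a 1.7 percent reduction in consumption, which the report characterizes as high relative to other dynamic pricing projects.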

Despite these results, dynamic pricing and enabling technologies are proliferating slowly in the electricity industry. Proliferation requires a combination of formal and informal institutional change to overcome a variety of barriers. And while formal institutional change (primarily in the form of federal legislation) is reducing some of these barriers, it remains an incremental process. The traditional rate structure, fixed by state regulation and slow to change, presents a substantial barrier. Predetermined load profiles inhibit market-based pricing by ignoring individual customer variation and the information that customers can communicate through choices in response to price signals. Furthermore, the persistence of standard offer service at a discounted rate (that is, a rate that does not reflect the financial cost of insurance against price risk) stifles any incentive customers might have to pursue other pricing options.

The most significant – yet also most intangible and difficult-to-overcome – obstacle to dynamic pricing and enabling technologies is inertia. All of the primary stakeholders in the industry – utilities, regulators and customers – harbor status quo bias. Incumbent utilities face incentives to maintain the regulated status quo as much as possible (given the economic, technological and demographic changes surrounding them) – and thus far, they’ve been successful in using the political process to achieve this objective.

Customer inertia also runs deep because consumers have not had to think about their consumption of electricity or the price they pay for it – a bias consumer advocates generally reinforce by arguing that low, stable prices for highly reliable power are an entitlement. Regulators and customers value the stability and predictability that have arisen from this vertically integrated, historically supply-oriented and reliability-focused environment; however, what goes unseen and unaccounted for is the opportunity cost of such predictability – the foregone value creation in innovative services, empowerment of customers to manage their own energy use and use of double-sided markets to enhance market efficiency and network reliability. Compare this unseen potential with the value creation in telecommunications, where even young adults can understand and adapt to cell phone pricing plans and benefit from the stream of innovations in the industry.

CONCLUSION

The potential for a highly distributed, decentralized network of devices automated to respond to price signals creates new policy and research questions. Will individuals choose to have prices sent directly to their devices? If so, will they adjust settings, and how? Does the combination of price effects and innovation increase total surplus, including consumer surplus? In aggregate, do these distributed actions create emergent order in the form of system reliability?

Answering these questions requires thinking about the diffuse and private nature of the knowledge embedded in the network, and the extent to which such a network becomes a complex adaptive system. Technology helps determine whether decentralized coordination and emergent order are possible; the dramatic transformation of digital technology in the past few decades has decreased transaction costs and increased the extent of feasible decentralized coordination in this industry. Institutions – which structure and shape the contexts in which such processes occur – provide a means for creating this coordination. And finally, regulatory institutions affect whether or not this coordination can occur.

For this reason, effective regulation should focus not on allocation but rather on decentralized coordination and how to bring it about. This in turn means a focus on market processes, which are adaptive institutions that evolve along with technological change. Regulatory institutions should also be adaptive, and policymakers should view regulatory policy as work in progress so that the institutions can adapt to unknown and changing conditions and enable decentralized coordination.

ENDNOTES

1. Order can take many forms in a complex system like electricity – for example, keeping the lights on (short-term reliability), achieving economic efficiency, optimizing transmission congestion, longer-term resource adequacy and so on.

2. Roger W. Gale, Jean-Louis Poirier, Lynne Kiesling and David Bodde, “The Path to Perfect Power: New Technologies Advance Consumer Control,” Galvin Electricity Initiative report (2007). www.galvinpower.org/resources/galvin.php?id=88

3. The exception to this claim is the TOU contract, where the rate structure is known in advance. However, even on such a simple dynamic pricing contract, devices that allow customers to see their consumption and expenditure in real time instead of waiting for their bill can change behavior.

4. D.J. Hammerstrom et al., “Pacific Northwest GridWise Testbed Demonstration Projects, Volume I: The Olympic Peninsula Project” (2007). http://gridwise.pnl.gov/docs/op_project_final_report_pnnl17167.pdf

5. Ibid.