Helping North American Utilities Transform the Way They Do Business

Utilities are facing a host of challenges, ranging from environmental concerns and aging infrastructure and systems to Smart Grid technology and related program decisions. The utility of the future will need to find effective solutions to these challenges while continuing to meet the rising expectations of newly empowered consumers. These challenges also bring an opportunity to create stronger, more profitable relationships with customers, and to do so more cost effectively.

Since our formation in 1996 as a subsidiary of UK-based United Utilities Plc, Vertex Business Services has grown to serve more than 70 North American utilities and retail energy clients, who in turn serve more than 23 million end-use customers. Our broad portfolio of Business Process Outsourcing (BPO) and Information Technology (IT) solutions enables our clients to manage operational costs more effectively, improve efficiency, develop front-line employees, and deliver a superior customer experience.

Improving Utility Collection Performance

Utilities can benefit greatly from the debt management practices and experience of industries such as banking and retail, which have developed a more sophisticated skill set. Benefits can come from adopting proven methodologies for managing accounts receivable and outsourced agency collections, and from using appropriate software for these processes. There is also value in using analytical tools to evaluate the collections process and optimize it based on the metrics collected.

Improve your collection rates and lower outstanding accounts receivable through Vertex’s proven collection services. Our rich heritage enables us to implement best practices and provide quality reporting strategies, ironclad credit and collection processes, and innovative training programs.

Handling Demand Response and Efficiency In the Call Center

Over the next five to 10 years, utilities will be forced to change more than at any other time in their history. These changes will be profound and widespread, affecting not only the utilities themselves but virtually every part of our modern electrified culture. One of the most dramatic changes will be in the traditional relationship between utilities and their customers, especially at the residential level. Passive electricity "rate payers" are about to become very active participants in the relationship with their utility.

Modeling Distribution Demand Reduction

In the past, distribution demand reduction was a technique used only in emergency situations a few times a year – if that. It was an all-or-nothing capability that the utility turned on and hoped for the best until the emergency was over. Few utilities could measure the effectiveness, let alone the potential, of the solutions that were devised.

Now, demand reduction is evolving to better support the distribution network during typical peaking events, rather than just emergencies. However, in this mode, it is important not only to understand the solution’s effectiveness, but to be able to treat it like any other dispatchable load-shaping resource. Advanced modeling techniques and capabilities are allowing utilities to do just that. This paper outlines various methods and tools that allow utilities to model distribution demand reduction capabilities within set time periods, or even in near real time.

Electricity demand continues to outpace the ability to build new generation and the infrastructure needed to meet the ever-growing, demand-side increases driven by population growth and smart residences across the globe. In most parts of the world, electrical energy is one of the foundations of modern civilization. It helps produce our food, keeps us comfortable, and provides lighting, security, information and entertainment. In short, it is a part of almost every facet of life, and without electrical energy, the modern interconnected world as we know it would cease to exist.

Every country has one or more initiatives underway, or in planning, to deal with some aspect of generation and storage, delivery or consumption issues. Additionally, greenhouse gases (GHG) and carbon emissions need to be tightly controlled and monitored. This must be carefully balanced with expectations from financial markets that utilities deliver balanced and secure investment portfolios by demonstrating fiduciary responsibility to sustain revenue projections and measured growth.

The architects of today’s electric grid probably never envisioned the day when electric utility organizations would purposefully take measures to reduce the load on the network, deal with highly variable localized generation and reverse power flows, or anticipate a regulatory climate that impacts the decisions for these measures. They designed the electric transmission and distribution systems to be robust, flexible and resilient.

When first conceived, the electric grid was far from stable and resilient. It took growth, prudence and planning to continue the expansion of the electric distribution system. This grid was made up of a limited number of real power and reactive power devices that responded to occasional changes in power flow and demand. However, it was also designed in a world with far fewer people, with a virtually unlimited source of power, and without much concern or knowledge of the environmental effects that energy production and consumption entail.

To effectively mitigate these complex issues, a new type of electric utility business model must be considered. It must rapidly adapt to ever-changing demands in terms of generation, consumption, and environmental and societal benefits. A grid made up of many intelligent and active devices that can manage consumption from both the consumer and utility sides of the meter must be developed. This new business model will use demand management as a key element of utility operations, while at the same time shaping consumer spending behavior.

To that end, a holistic model is needed that understands all aspects of the energy value chain across generation, delivery and consumption, and can optimize the solution in real time. While a unifying model may still be a number of years away, much can be gained today from modeling and visualizing the distribution network to gauge the effect that demand reduction can – and does – have in near real time. The following approaches are worth considering.

Advanced Feeder Modeling

First, a utility needs to understand in more detail how its distribution network behaves. When distribution networks were conceived, they were designed primarily with sources (the head of the feeder and substation) and sinks (the consumers or load) spread out along the distribution network. Power flows were assumed to be one direction only, and the feeders were modeled for the largest peak level.

Voltage and volt-ampere reactive (VAR) management was generally considered for loss optimization, not load reduction. Little thought was given to limiting power to segments of the network, or to distributed storage or generation, all of which could dramatically affect power flows on the network, even causing reverse flows at times. Sensors to measure voltage and current were applied at the head of the feeder and at a few critical points (mostly in historical problem areas).

Feeder planning at most utilities is an exercise performed when large changes are anticipated (e.g., a new subdivision or a major customer) or on a periodic basis, usually every three to five years. Loads were traditionally well understood, with predictable variability, so this approach worked reasonably well. The utility also controlled all generation sources on the network (e.g., peakers), and when demand reduction was needed, it was controlled by the utility, usually only during critical periods.

Today’s feeders are much more complex, and are being significantly influenced by both generation and demand from entities outside the control of the utility. Even within the utility, various seemingly disparate groups will, at times, attempt to alter power flows along the network. The simple model of worst-case peaking on a feeder is not sufficient to understand the modern distribution network.

The following factors must be considered in the planning model:

  • Various demand-reduction techniques, when and where they are applied and the potential load they may affect;
  • Use of voltage reduction as a load-shedding technique, and where it will most likely yield significant results (e.g., resistive load);
  • Location, size and capacity of storage;
  • Location, size and type of renewable generation systems;
  • Use and location of plug-in electrical vehicles;
  • Standby generation that can be fed into the network;
  • Various social ecosystems and their characteristics to influence load; and
  • Location and types of sensors available.

Generally, feeders are modeled as a single unit, with their power characteristics derived from the maximum peaking load and the connected kilovolt-amperes (kVA) of downstream transformers. A more advanced model treats the feeder as a series of connected segments. The segment definitions can be arbitrary, but segments are generally chosen where the utility will want to understand, and potentially control, them differently from the others. This may be influenced by voltage regulation, load curtailment, stability issues, distributed generation sources, storage, or other characteristics that differ from one segment to the next.
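
As an illustration of this segment-based view, the sketch below represents a feeder as a list of segments with per-segment connected kVA, peak load, distributed generation and curtailable load, and compares the lumped peak with the peak after segment-level reductions. It is a hypothetical Python model under assumed names and values, not any particular planning tool.

```python
# A minimal sketch of a segment-based feeder model. All names, attributes and
# numbers are illustrative assumptions, not part of a vendor planning package.
from dataclasses import dataclass, field
from typing import List

@dataclass
class FeederSegment:
    name: str
    connected_kva: float        # sum of downstream transformer nameplate kVA
    peak_load_kw: float         # historical or estimated segment peak
    dg_capacity_kw: float = 0.0 # distributed generation on this segment
    storage_kwh: float = 0.0    # energy storage on this segment
    curtailable_kw: float = 0.0 # load enrolled in demand-reduction programs

@dataclass
class Feeder:
    name: str
    segments: List[FeederSegment] = field(default_factory=list)

    def peak_load_kw(self) -> float:
        # Lumped view: simple sum of segment peaks (ignores diversity).
        return sum(s.peak_load_kw for s in self.segments)

    def net_peak_after_reduction_kw(self) -> float:
        # Segment view: credit each segment's curtailable load and DG.
        return sum(max(s.peak_load_kw - s.curtailable_kw - s.dg_capacity_kw, 0.0)
                   for s in self.segments)

feeder = Feeder("F101", [
    FeederSegment("head", 5000, 3200),
    FeederSegment("mid", 3000, 2100, dg_capacity_kw=250, curtailable_kw=150),
    FeederSegment("tail", 1500, 900, storage_kwh=400, curtailable_kw=80),
])
print(feeder.peak_load_kw(), feeder.net_peak_after_reduction_kw())
```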

The following describes an advanced means of modeling electrical distribution feeder networks. It provides for segmentation and sensor placement in the absence of a complete network and historical usage model. The modeling combines traditional electrical engineering and power-flow modeling, using tools such as CYME, with non-traditional approaches based on geospatial and statistical analysis.

The model builds upon information such as usage data, network diagrams, device characteristics and existing sensors. It then adds elements that could create discrepancies with the known model, such as social behavior, demand-side programs and future grid operations, based on both spatio-temporal and statistical modeling. Finally, suggestions can be made about sensor placement and characteristics to support monitoring once the system is in place.

Generally, a utility would take a simpler view of the problem. It would start by directly applying statistical analysis and stochastic modeling across the grid to develop a generic methodology for selecting the number of sensors and where to place them, based on sensor accuracy, cost and the risk of error introduced by basic modeling assumptions (load allocation, timing of peak demand, and other influences on error). However, doing so would limit the utility to the data it already has, in an environment that will be changing dramatically.

The recommended approach performs some analysis up front to determine what the potential error sources are, which sources are material to the sensor question, and which could influence the system’s power flows. Next, an attempt can be made to geographically characterize where on the grid these influences are most significant. Then, a statistical approach can be applied to develop a model for setting the number, type and location of additional sensors. Lastly, sensor density and placement can be addressed.

Feeder Modeling Technique

Feeder conditioning is important to minimize losses, especially when the utility wants to moderate voltage levels as a load modification method. Without proper feeder conditioning and sufficient sensors to monitor the network, the utility is at risk of either violating regulatory voltage limits or failing to achieve the optimal load reduction during voltage reduction operations.
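
The rule-of-thumb relationship behind voltage-based load reduction can be sketched as follows; the conservation voltage reduction (CVR) factor, the 0.8 default and the 114 V floor are illustrative assumptions, not values given in this paper.

```python
# A rough illustration of why sensing matters during voltage reduction.
# Assumes the common CVR rule of thumb (percent load reduction is roughly
# CVR_factor * percent voltage reduction) and a 114 V service-voltage floor
# (ANSI C84.1 Range A for a 120 V nominal system). Numbers are assumptions.
def voltage_reduction_headroom(measured_v: float, floor_v: float = 114.0) -> float:
    """Percent voltage reduction available before violating the floor."""
    return max((measured_v - floor_v) / measured_v * 100.0, 0.0)

def estimated_load_reduction_pct(voltage_reduction_pct: float,
                                 cvr_factor: float = 0.8) -> float:
    """Estimated percent load reduction for a given voltage reduction."""
    return cvr_factor * voltage_reduction_pct

# With a sensor reading taken at the worst point of the feeder:
headroom = voltage_reduction_headroom(measured_v=118.0)
print(f"{headroom:.1f}% voltage headroom, "
      f"~{estimated_load_reduction_pct(headroom):.1f}% load reduction available")
```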

Traditionally, feeder modeling is a planning activity performed at periodic (for example, yearly) intervals or ahead of an expected change in usage. Tools such as CYME’s CYMDIST module provide feeder analysis using:

  • Balanced and unbalanced voltage drop analysis (radial, looped or meshed);
  • Optimal capacitor placement and sizing to minimize losses and/or improve voltage profile;
  • Load balancing to minimize losses;
  • Load allocation/estimation using customer consumption data (kWh), distribution transformer size (connected kVA), real consumption (kVA or kW) or the REA method. The algorithm treats multiple metering units as fixed demands and large metered customers as fixed loads (a simple allocation sketch follows this list);
  • Flexible load models for uniformly distributed loads and spot loads featuring independent load mix for each section of circuit;
  • Load growth studies for multiple years; and
  • Distributed generation.
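
As a simple illustration of the kVA-based load allocation mentioned in the list above, the following sketch distributes a measured feeder peak across transformers in proportion to connected kVA while holding large metered customers at their known demand. It is an assumption-laden stand-in, not CYMDIST’s actual algorithm.

```python
# A minimal sketch of kVA-proportional load allocation. Transformer names and
# values are made up; this is not the commercial tool's implementation.
def allocate_feeder_peak(feeder_peak_kw, transformers, metered_loads_kw=None):
    """transformers: dict of name -> connected kVA.
    metered_loads_kw: dict of name -> known fixed demand in kW."""
    metered_loads_kw = metered_loads_kw or {}
    remaining_kw = feeder_peak_kw - sum(metered_loads_kw.values())
    unmetered = {n: kva for n, kva in transformers.items()
                 if n not in metered_loads_kw}
    total_kva = sum(unmetered.values())
    allocation = {n: remaining_kw * kva / total_kva for n, kva in unmetered.items()}
    allocation.update(metered_loads_kw)  # metered customers stay fixed
    return allocation

print(allocate_feeder_peak(
    4200.0,
    {"T1": 500, "T2": 750, "T3": 300, "BigBox": 1000},
    metered_loads_kw={"BigBox": 800.0},
))
```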

However, in many cases, much of the information required to run an accurate model is not available. This may be because the data does not exist, the feeder usage paradigm is changing, the sampling period does not represent typical usage of the network, network usage is about to undergo significant changes, or because of other, non-electrical factors.

This represents a bit of a chicken-or-egg problem. A utility needs to condition its feeders to change the operational paradigm, but it also needs operational information to decide where and how to change the network. The solution is to combine existing usage and network data with other forms of modeling and approximation to build the best possible model of the future network.

Therefore, this exercise refines traditional modeling with three additional techniques: geospatial analysis; statistical modeling; and sensor selection and placement for accuracy.

If a distribution management system (DMS) will be deployed, or is being considered, its modeling capability may be used as an additional basis and refinement employing simulated and derived data from the above techniques. Lastly, if high accuracy is required and time allows, a limited number of feeder segments can be deployed and monitored to validate the various modeling theories prior to full deployment.

The overall goals for using this type of technique are:

  • Limit customer over- or under-voltage;
  • Maximize returned megawatts in the system in load reduction modes;
  • Optimize the effectiveness of the DMS and its models;
  • Minimize cost of additional sensors to only areas that will return the most value;
  • Develop automated operational scenarios, test and validation prior to system-wide implementation; and
  • Provide a foundation for additional network automation capabilities.

The first step is to set aside a short period of time to thoroughly vet possible influences on the number, spacing and value offered by additional sensors on the distribution grid. This involves understanding and obtaining the information that will most influence the model, and therefore the use of sensors. Such information could include historical load data, distribution network characteristics, transformer nameplate loading, customer survey data, weather data and other related information.

The second step is the application of geospatial analysis to identify areas of the grid most likely to have influences driving a need for additional sensors. It is important to recognize that within this step is a need to correlate those influential geospatial parameters with load profiles of various residential and commercial customer types. This step represents an improvement over simply applying the same statistical analysis generically over the entirety of the grid, allowing for two or more “grades” of feeder segment characteristics for which different sensor standards would be developed.

The third step is the statistical analysis and stochastic modeling to develop recommended standards and methodology for determining sensor placement based on the characteristic segments developed from the geospatial assessment. Items set aside as not material for sensor placement serve as a necessary input to the coming “predictive model” exercise.

Lastly, a traditional electrical and accuracy-based analysis is used to model the exact number and placement of additional sensors to support the derived models and planned usage of the system for all scenarios depicted in the model – not just summertime peaking.
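
A highly simplified sketch of how the geospatial, statistical and placement steps could fit together is shown below; the grading scheme, weights and sample data are illustrative assumptions, since the paper describes the method only qualitatively.

```python
# A simplified sketch of steps two through four: grade feeder segments
# geospatially, estimate load uncertainty statistically, then spend a limited
# sensor budget where uncertainty and influence are highest. All scores,
# weights and sample values are illustrative assumptions.
import statistics

def rank_segments_for_sensors(segments, budget):
    """segments: list of dicts with 'name', 'geo_grade' (1 = most influenced
    by DG/EV/demand-response factors), and 'load_samples' (historical kW)."""
    scored = []
    for seg in segments:
        load_std = statistics.pstdev(seg["load_samples"])  # statistical step
        geo_weight = 1.0 / seg["geo_grade"]                # geospatial step
        scored.append((geo_weight * load_std, seg["name"]))
    scored.sort(reverse=True)
    return [name for _, name in scored[:budget]]           # placement step

segments = [
    {"name": "F101-mid", "geo_grade": 1, "load_samples": [900, 1400, 700, 1600]},
    {"name": "F101-tail", "geo_grade": 2, "load_samples": [300, 320, 310, 305]},
    {"name": "F102-head", "geo_grade": 1, "load_samples": [2000, 2100, 1950, 2050]},
]
print(rank_segments_for_sensors(segments, budget=2))
```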

Conclusion

The modern distribution network built for the smart grid will need significantly more detailed planning and modeling than a traditional network. No single tool is suited to the task, and it will take multiple disciplines and techniques to derive the most benefit from the modeling exercise. However, if a utility embraces the techniques described in this paper, it will not only have a better understanding of how its networks perform in various smart grid scenarios, but it will also be better positioned to optimize its networks for both load and losses.

Silver Spring Networks

When engineers built the national electric grid, their achievement made every other innovation built on or run by electricity possible – from the car and airplane to the radio, television, computer and the Internet. Over decades, all of these inventions have gotten better, smarter and cheaper while the grid has remained exactly the same. As a result, our electrical grid is operating under tremendous stress. The Department of Energy estimates that by 2030, demand for power will outpace supply by 30 percent. And this increasing demand for low-cost, reliable power must be met alongside growing environmental concerns.

Silver Spring Networks (SSN) provides the first proven technology to enable the smart grid. SSN is a complete smart grid solutions company that enables utilities to achieve operational efficiencies, reduce carbon emissions and offer their customers new ways to monitor and manage their energy consumption. SSN provides hardware, software and services that allow utilities to deploy and run unlimited advanced applications, including smart metering, demand response, distribution automation and distributed generation, over a single, unified network.

The smart grid should operate like the Internet for energy, without proprietary networks built around a single application or device. In the same way that one can plug any laptop or device into the Internet, regardless of its manufacturer, utilities should be able to “plug in” any application or consumer device to the smart grid. SSN’s Smart Energy Network is based on open, Internet Protocol (IP) standards, allowing for continuous, two-way communication between the utility and every device on the grid – now and in the future.

The IP networking standard adopted by Federal agencies has proven secure and reliable over decades of use in the information technology and finance industries. This network provides a high-bandwidth, low-latency and cost-effective solution for utility companies.

SSN’s network interface cards (NICs) are installed in “smart” devices, such as smart meters at the consumer’s home, allowing them to communicate with SSN’s access points. Each access point communicates with networked devices over a radius of one or two miles, creating a wireless communication mesh that connects every device on the grid to one another and to the utility’s back office.
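
As a toy illustration of that coverage model (not SSN’s software), the sketch below assigns each networked device to the nearest access point within an assumed two-mile radio range.

```python
# A toy sketch of access-point association: each device joins the nearest
# access point within an assumed ~2-mile radius. Coordinates and the range
# limit are made-up assumptions, not vendor specifications.
import math

def miles_between(a, b):
    """Great-circle distance in miles between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 3958.8 * 2 * math.asin(math.sqrt(h))

def assign_devices(devices, access_points, max_range_miles=2.0):
    """Map each device to the nearest in-range access point (or None)."""
    assignment = {}
    for dev, dev_loc in devices.items():
        dist, ap = min((miles_between(dev_loc, ap_loc), ap)
                       for ap, ap_loc in access_points.items())
        assignment[dev] = ap if dist <= max_range_miles else None
    return assignment

aps = {"AP-1": (37.485, -122.23), "AP-2": (37.52, -122.27)}
meters = {"meter-001": (37.49, -122.235), "meter-002": (37.60, -122.40)}
print(assign_devices(meters, aps))
```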

Using the Smart Energy Network, utilities will be able to remotely connect or disconnect service, send pricing information to customers who can understand how much their energy is costing in real time, and manage the integration of intermittent renewable energy sources like solar panels, plug-in electric vehicles and wind farms.

In addition to providing the Smart Energy Network and the software/firmware that makes it run smoothly, SSN develops applications such as outage detection and restoration, and provides support services to its utility customers. By minimizing or eliminating interruptions, the self-healing grid could save industrial and residential consumers over $100 billion per year.

Founded in 2002 and headquartered in Redwood City, Calif., SSN is a privately held company backed by Foundation Capital, Kleiner Perkins Caufield & Byers and Northgate Capital. The company has over 200 employees and a global reach, with partnerships in Australia, the U.K. and Brazil.

SSN is the leading smart grid solutions provider, with successful deployments with utilities serving 20 percent of the U.S. population, including Florida Power & Light (FPL), Pacific Gas & Electric (PG&E), Oklahoma Gas & Electric (OG&E) and Pepco Holdings, Inc. (PHI), among others.

FPL is one of the largest electric utilities in the U.S., serving approximately 4.5 million customers across Florida. In 2007, SSN and FPL partnered to deploy SSN’s Smart Energy Network to 100,000 FPL customers. The project began with rigorous environmental and reliability testing to ensure that SSN’s technology would hold up under the harsh environmental conditions in some areas of Florida. Few companies are able to sustain the scale and quality of testing that FPL required during this deployment, including power outage notification testing, exposure to water and salt spray, and network throughput performance tests of self-healing failover characteristics.

SSN’s solution has met or exceeded all FPL acceptance criteria. FPL plans to continue deployment of SSN’s Smart Energy Network at a rate of one million networked meters per year beginning in 2010 to all 4.5 million residential customers.

PG&E is currently rolling out SSN’s Smart Energy Network to all 5 million electric customers over a 70,000 square-mile service area.

OG&E, a utility serving 770,000 customers in Oklahoma and western Arkansas, worked with SSN to deploy a small-scale pilot project to test the Smart Energy Network and gauge customer satisfaction. The utility deployed SSN’s network, along with a web-based energy management portal, in 25 homes in northwest Oklahoma City. Another 6,600 apartments were given networked meters to allow remote initiation and termination of service.

Consumer response to the project was overwhelmingly positive. Participating residents said they gained flexibility and control over their household’s energy consumption by monitoring their usage on in-home touch screen information panels. According to one customer, “It’s the three A’s: awareness, attitude and action. It increased our awareness. It changed our attitude about when we should be using electricity. It made us take action.”

Based on the results, OG&E presented a plan for expanded deployment to the Oklahoma Corporation Commission for its consideration.

PHI recently announced its partnership with SSN to deliver The Smart Energy Network to its 1.9 million customers across Washington, D.C., Delaware, Maryland and New Jersey. The first phase of the smart grid deployment will begin in Delaware in March 2009 and involve SSN’s advanced metering and distribution automation technology. Additional deployment will depend on regulatory authorization.

The impact of energy efficiency is enormous. More aggressive energy efficiency efforts could cut the growth rate of worldwide energy consumption by more than half over the next 15 years, according to the McKinsey Global Institute. The Brattle Group states that demand response could reduce peak load in the U.S. by at least 5 percent over the next few years, saving over $3 billion per year in electricity costs. The discounted present value of these savings would be $35 billion over the next 20 years in the U.S. alone, with significantly greater savings worldwide.
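
The present-value figure depends on a discount rate the source does not state; the sketch below simply shows the standard discounting arithmetic for $3 billion per year over 20 years at an assumed 6 percent rate.

```python
# The text does not state the discount rate behind the $35 billion figure.
# This sketch shows the standard present-value arithmetic for $3 billion per
# year over 20 years at an assumed 6 percent rate (an illustrative choice).
def present_value(annual_savings_billion, years, rate):
    return sum(annual_savings_billion / (1 + rate) ** t
               for t in range(1, years + 1))

print(round(present_value(3.0, 20, 0.06), 1))  # roughly 34 ($ billions)
```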

Governments throughout the EU, Canada and Australia are now mandating implementation of alternate energy and grid efficiency network programs. The Smart Energy Network is the technology platform that makes energy efficiency and the smart grid possible. And, it is working in the field today.

Managing Communications Change

Change is being forced upon the utilities industry. Business drivers range from stakeholder pressure for greater efficiency to the changing technologies involved in operational energy networks. New technologies such as intelligent networks (smart grids), distribution automation and smart metering are being considered.

The communications network is becoming the key enabler for the evolution of reliable energy supply. However, few utilities today have a communications network that is robust enough to handle and support the exacting demands that energy delivery is now making.

It is this process of change – including the renewal of the communications network – that is vital for each utility’s future. But for the utility, this is a technological step change requiring different strategies and designs. It also requires new skills, all of which must be acquired and implemented in timescales that do not sit comfortably with traditional technology strategies.

The problems facing today’s utility include understanding the new technologies and assessing their capabilities and applications. In addition, the utility has to develop an appropriate strategy to migrate legacy technologies and integrate them with the new infrastructure in a seamless, efficient, safe and reliable manner.

This paper highlights the benefits utilities can realize by adopting a new approach to their customers’ needs and engaging a network partner that will take responsibility for the network upgrade, its renewal and evolution, and the service transition.

The Move to Smart Grids

The intent of smart grids is to provide better efficiency in the production, transport and delivery of energy. This is realized in two ways:

  • Better real-time control: ability to remotely monitor and measure energy flows more closely, and then manage those flows and the assets carrying them in real time.
  • Better predictive management: ability to monitor the condition of the different elements of the network, predict failure and direct maintenance. The focus is on being proactive to real needs prior to a potential incident, rather than being reactive to incidents, or performing maintenance on a repetitive basis whether it is needed or not.

These mechanisms imply more measurement points and more remote monitoring and management capability than exist today. And this requires a greater reliance on robust, highly available communications than has ever been the case before.

The communications network must continue to support operational services independently of external events, such as power outages or public service provider failure, yet be economical and simple to maintain. Unfortunately, the majority of today’s utility communications implementations fall far short of these stringent requirements.

Changing Environment

The design template for the majority of today’s energy infrastructure was developed in the 1950s and 1960s – and the same is true of the associated communications networks.

Typically, these communications networks have evolved into a series of overlays, often of different technology types and generations (see Figure 1). For example, protection tends to use its own dedicated network. The physical realization varies widely, from tones over copper, via dedicated time division multiplexing (TDM) connections, to dedicated fiber connections. These generally use a mix of privately owned and leased services.

Supervisory control and data acquisition (SCADA) systems generally still use modem technology at speeds between 300 baud and 9.6 kbaud. Again, the infrastructure is often copper or TDM running as one of many separate overlay networks.

Lastly, operational voice services (as opposed to business voice services) are frequently analog on yet another separate network.

Historically, there were good operational reasons for these overlays. But changes in device technology (for example, the evolution toward e-SCADA based on IP protocols), as well as declining vendor support for legacy communications technologies, mean that the strategy for these networks has to be reassessed. In addition, the increasing demand for further operational applications (for example, condition monitoring or CCTV, both supporting substation automation) requires a more up-to-date networking approach.

Tomorrow’s Network

With the exception of protection services, communications between network devices and the network control centers are evolving toward IP-based networks (see Figure 2). The benefits of this simplified infrastructure are significant and can be measured in terms of asset utilization, reduced capital and operational costs, ease of operation, and the flexibility to adapt to new applications. Consequently, utilities will find themselves forced to seriously consider the shift to a modern, homogeneous communications infrastructure to support their critical operational services.

Organizing For Change

As noted above, there are many cogent reasons to transform utility communications to a modern, robust communications infrastructure in support of operational safety, reliability and efficiency. However, some significant considerations should be addressed to achieve this transformation:

Network Strategy. It is almost inevitable that a new infrastructure will cross traditional operational and departmental boundaries within the utility. Each operational department will have its own priorities and requirements for such a network, and traditionally, each wants some, or total, control. However, to achieve real benefits, a greater degree of centralized strategy and management is required.

Architecture and Design. The new network will require careful engineering to ensure that it meets the performance-critical requirements of energy operations. It must maintain or enhance the safety and reliability of the energy network, as well as support the traffic requirements of other departments.

Planning, Execution and Migration. Planning and implementation of the core infrastructure is just the start of the process. Each service requires its own migration plan and has its own migration priorities. Each element requires specialist technical knowledge and, preferably, practical field experience.

Operation. Gone are the days when a communications failure was rectified by sending an engineer into the field to find the fault and fix it. Maintaining network availability and robustness calls for sound operational processes and excellent diagnostics before any engineer or technician hits the road. The same level of robust centralized management tools and processes that support the energy networks has to be put in place to support the communications network – no matter what technologies are used in the field.

Support. Although these technologies are well understood by the telecommunications industry, they are likely to be new to the energy utilities industry. This means that a solid support organization familiar with these technologies must be established. The evolution process requires an intense level of up-front skills and resources, which are often not readily available in-house – certainly not in the volume required to make any network renewal or transformation effective. Building up this skill and resource base through recruitment will not necessarily yield staff who are aware of the peculiarities of the energy utilities market. As a result, there will be a significant time lag from concept to execution, and considerable risk for the utility as it ventures alone into unknown territory.

Keys To Successful Engagement

Engaging a services partner does not mean ceding control through a rigid contract. Rather, it means crafting a flexible relationship that takes into consideration three factors: What is the desired outcome of the activity? What is the best balance of scope between partner assistance and in-house performance to achieve that outcome? How do you retain the flexibility to accommodate change while retaining control?

Desired outcome is probably the most critical element and must be well understood at the outset. For one utility, the desired outcome may be to rapidly enable the upgrade of the complete energy infrastructure without having to incur the upfront investment in a mass recruitment of the required new communications skills.

For other utilities, the desired outcome may be different. But if the outcomes include elements of time pressure, new skills and resources, and/or network transformation, then engaging a services partner should be seriously considered as one of the strategic options.

Second, not all activities have to be in scope. The objective of the exercise might be to supplement existing in-house capabilities with external expertise. Or, it might be to launch the activity while building up appropriate in-house resources in a measured fashion through the Build-Operate-Transfer (BOT) approach.

In looking for a suitable partner, the utility seeks to leverage not only the partner’s existing skills, but also its experience and lessons learned performing the same services for other utilities. Having a few bruises is not a bad thing – this means that the partner understands what is at stake and the range of potential pitfalls it may encounter.

Lastly, retaining flexibility and control is a function of the contract between the two parties which should be addressed in their earliest discussions. The idea is to put in place the necessary management framework and a robust change control mechanism based on a discussion between equals from both organizations. The utility will then find that it not only retains full control of the project without having to take day-to-day responsibility for its management, but also that it can respond to change drivers from a variety of sources – such as technology advances, business drivers, regulators and stakeholders.

Realizing the Benefits

Outsourcing or partnering on the communications transformation will yield benefits, both tangible and intangible. It must be remembered that there is no standard “one-size-fits-all” outsourcing product. Thus, the benefits accrued will depend on the details of the engagement.

There are distinct tangible benefits that can be realized, including:

Skills and Resources. A unique benefit of outsourcing is that it eliminates the need to recruit skills not available internally. These are provided by the partner on an as-needed basis. The additional advantage for the utility is that it does not have to bear the fixed costs once the skills are no longer required.

Offset Risks. Because the partner is responsible for delivery, the utility is able to mitigate risk. For example, traditionally vendors are not motivated to do anything other than deliver boxes on time. But with a well-structured partnership, there is an incentive to ensure that the strategy and design are optimized to economically deliver the required services and ease of operation. Through an appropriate regime of business-related key performance indicators (KPIs), there is a strong financial incentive for the partner to operate and upgrade the network to maintain peak performance – something that does not exist when an in-house organization is used.

Economies of Scale. Outsourcing can bring economies of scale resulting from synergies with other parts of the partner’s business, such as contracts and internal projects.

There also are many other benefits associated with outsourcing that are not as immediately obvious and commercially quantifiable as those listed above, but can be equally valuable.

Some of these less tangible benefits include:

Fresh Point of View. Within most companies, employees often have a vested interest in maintaining the status quo. But a managed services organization has a vested interest in delivering the best possible service to the customer – a paradigm shift in attitude that enables dramatic improvements in performance and creativity.

Drive to Achieve Optimum Efficiency. Executives, freed from the day-to-day business of running the network, can focus on their core activities, concentrating on service excellence rather than complex technology decisions. To quote one customer, “From my perspective, a large amount of my time that might have in the past been dedicated to networking issues is now focused on more strategic initiatives concerned with running my business more effectively.”

Processes and Technologies Optimization. Optimizing processes and technologies to improve contract performance is part of the managed services package and can yield substantial savings.

Synergies with Existing Activities Create Economies of Scale. A utility and a managed services vendor have considerable overlap in the functions performed within their communications engineering, operations and maintenance activities. For example, a multi-skilled field force can install and maintain communications equipment belonging to a variety of customers. This not only provides cost savings from synergies with the equivalent customer activity, but also an improved fault response due to the higher density of deployed staff.

Access to Global Best Practices. An outsourcing contract relieves a utility of the time-consuming and difficult responsibility of keeping up to speed with the latest thinking and developments in technology. Alcatel-Lucent, for example, invests around 14 percent of its annual revenue into research and development; its customers don’t have to.

What Can Be Outsourced?

There is no one outsourcing solution that fits all utilities. The final scope of any project will be entirely dependent on a utility’s specific vision and current circumstances.

The following list briefly describes some of the functions and activities that are good possibilities for outsourcing:

Communications Strategy Consulting. Before making technology choices, the energy utility needs to define the operational strategy of the communications network. Too often communications is viewed as “plug and play,” which is hardly ever the case; seamless operation requires a well-thought-out communications strategy. Without that initial strategy, the utility risks repeating past mistakes and acquiring an ad-hoc network that will rapidly become a legacy infrastructure, which will, in turn, need replacing.

Design. Outsourcing allows utilities to evolve their communications infrastructure without upfront investment in incremental resources and skills. The utility can delegate responsibility for defining the network architecture and the associated network support systems. It may elect to leave all technological decisions to the vendor and merely review progress and outcomes. Or, it may retain responsibility for technology strategy and turn to the managed services vendor to translate that strategy into an architecture and manage the subsequent design and project activities.

Build. Detailed planning of the network, the rollout project and the delivery of turnkey implementations all fall within the scope of the outsourcing process.

Operate, Administer and Maintain. Includes network operations and field and support services:

  • Network Operations. A vendor such as Alcatel-Lucent has the necessary experience in operating Network Operations Centers (NOCs), both on a BOT and ongoing basis. This includes handling all associated tasks such as performance and fault monitoring, and services management.
  • Network and Customer Field Services. Today, few energy utilities consider outside maintenance and provisioning activities to be a strategic part of their business, and they recognize that these are prime candidates for outsourcing. Activities that can be outsourced include corrective and preventive maintenance, network and service provisioning, and spare parts management, return and repair – in other words, all the daily, time-consuming, but vitally important elements of running a reliable network.
  • Network Support Services. Behind the first-line activities of the NOC are a set of engineering support functions that assist with more complex faults – functions that cannot be automated and tend to duplicate those of the vendor. The integration and sharing of these functions enabled by outsourcing can significantly improve the utility’s efficiency.

Conclusion

Outsourcing can deliver significant benefits to a utility, both in its ability to invest in and improve its operations and in the associated costs. However, each utility has its own unique circumstances, specific immediate needs, and vision of where it is going. Therefore, each technical and operational solution is different.

Alcatel-Lucent: Your Smart Grid Partner

Alcatel-Lucent offers comprehensive capabilities that combine Utility industry-specific knowledge and experience with carrier-grade communications technology and expertise. Our IP/MPLS Transformation capabilities and Utility market-specific knowledge are the foundation of turnkey solutions designed to enable Smart Grid and Smart Metering initiatives. In addition, Alcatel-Lucent has specifically developed Smart Grid and Smart Metering applications and solutions that:

  • Improve the availability, reliability and resiliency of critical voice and data communications even during outages
  • Enable optimal use of network and grid devices by setting priorities for communications traffic according to business requirements
  • Meet NERC CIP compliance and cybersecurity requirements
  • Improve the physical security and access control mechanism for substations, generation facilities and other critical sites
  • Offer a flexible and scalable network to grow with the demands and bandwidth requirements of new network service applications
  • Provide secure web access for customers to view account, electricity usage and billing information
  • Improve customer service and experience by integrating billing and account information with IP-based, multi-channel client service platforms
  • Reduce carbon emissions and increase efficiency by lowering communications infrastructure power consumption by as much as 58 percent

Working with Alcatel-Lucent enables Energy and Utility companies to realize the increased reliability and greater efficiency of next-generation communications technology, providing a platform for, and minimizing the risks associated with, moving to Smart Grid solutions. And Alcatel-Lucent helps Energy and Utility companies achieve compliance with regulatory requirements and reductions in operational expenses while maintaining the security, integrity and high availability of their power infrastructure and services. We build Smart Networks to support the Smart Grid.

American Recovery and Reinvestment Act of 2009 Support from Alcatel-Lucent

The American Recovery and Reinvestment Act (ARRA) of 2009 was adopted by Congress in February 2009 and allocates $4.5 billion to the Department of Energy (DoE) for Smart Grid deployment initiatives. As a result of the ARRA, the DoE has established a process for awarding the $4.5 billion via investment grants for Smart Grid Research and Development, and Deployment projects. Alcatel-Lucent is uniquely qualified to help utilities take advantage of the ARRA Smart Grid funding. In addition to world-class technology and Smart Grid and Smart Metering solutions, Alcatel-Lucent offers turnkey assistance in the preparation of grant applications, and subsequent follow-up and advocacy with federal agencies. Partnership with Alcatel-Lucent on ARRA includes:

  • Design, implementation and support for a Smart Grid network
  • Identification of all standardized and unique elements of each grant program
  • Preparation and compilation of all required grant application components, such as project narratives, budget formation, market surveys, mapping, and all other documentation required for completion
  • Advocacy at federal, state, and local government levels to firmly establish the value proposition of a proposal and advance it through the entire process to ensure the maximum opportunity for success

Alcatel-Lucent is a Recognized Leader in the Energy and Utilities Market

Alcatel-Lucent is an active and involved leader in the Energy and Utility market, with active membership and leadership roles in key Utility industry associations, including the Utility Telecom Council (UTC), the American Public Power Association (APPA), and GridWise. GridWise is an association of Utilities, industry research organizations (e.g., EPRI and Pacific Northwest National Laboratory), and Utility vendors, working in cooperation with the DoE to promote Smart Grid policy, regulatory issues, and technologies (see www.gridwise.org for more information). Alcatel-Lucent is also represented on the Board of Directors of UTC’s Smart Network Council, which was established in 2008 to promote and develop Smart Grid policies, guidelines, and recommended technologies and strategies for Smart Grid solution implementation.

Alcatel-Lucent IP MPLS Solution for the Next Generation Utility Network

Utility companies are experienced at building and operating reliable and effective networks to ensure the delivery of essential information and maintain flawless service delivery. The Alcatel-Lucent IP/MPLS solution can enable the utility operator to extend and enhance its network with new technologies like IP, Ethernet and MPLS. These new technologies will enable the utility to optimize its network to reduce both CAPEX and OPEX without jeopardizing reliability. Advanced technologies also allow the introduction of new Smart Grid applications that can improve operational and workflow efficiency within the utility. Alcatel-Lucent leverages cutting edge technologies along with the company’s broad and deep experience in the utility industry to help utility operators build better, next-generation networks with IP/MPLS.

Alcatel-Lucent has years of experience in the development of IP, MPLS and Ethernet technologies. The Alcatel-Lucent IP/MPLS solution offers utility operators the flexibility, scale and feature sets required for mission-critical operation. With the broadest portfolio of products and services in the telecommunications industry, Alcatel-Lucent has the unparalleled ability to design and deliver end-to-end solutions that drive next-generation utility networks.

About Alcatel-Lucent

Alcatel-Lucent’s vision is to enrich people’s lives by transforming the way the world communicates. As a leader in utility, enterprise and carrier IP technologies, fixed, mobile and converged broadband access, applications, and services, Alcatel-Lucent offers the end-to-end solutions that enable compelling communications services for people at work, at home and on the move.

With 77,000 employees and operations in more than 130 countries, Alcatel-Lucent is a local partner with global reach. The company has the most experienced global services team in the industry, and Bell Labs, one of the largest research, technology and innovation organizations focused on communications. Alcatel-Lucent achieved adjusted revenues of €17.8 billion in 2007, and is incorporated in France, with executive offices located in Paris.

Empowering the Smart Grid

Trilliant is the leader in delivering intelligent networks that power the smart grid. Trilliant provides hardware, software and service solutions that deliver on the promise of Advanced Metering and Smart Grid to utilities and their customers, including improved energy efficiency, grid reliability, lower operating cost, and integration of renewable energy resources.

Since its founding in 1985, the company has been a leading innovator in the delivery and implementation of advanced metering infrastructure (AMI), demand response and grid management solutions, in addition to installation, program management and meter revenue cycle services. Trilliant is focused on enabling choice for utility companies, ranging from meter, network and IT infrastructures to full or hybrid outsource models.

Solutions

Trilliant provides fully automated, two-way wireless network solutions and software for smart grid applications. The company’s smart grid communications solutions enable utilities to create a more efficient and robust operational infrastructure to:

  • Read meters on demand at intervals of five minutes or less;
  • Improve cash flow;
  • Improve customer service;
  • Decrease issue resolution time;
  • Verify outages and restoration in real time;
  • Monitor substation equipment;
  • Perform on/off cycle reads;
  • Conduct remote connect/disconnect;
  • Significantly reduce/eliminate energy theft through tamper detection; and
  • Realize accounting/billing improvements.

Trilliant solutions also enable the introduction of services and programs such as:

  • Dynamic demand response; and
  • Time-of-use (TOU), critical peak pricing (CPP) and other special tariffs and related metering.

Solid Customer Base

Trilliant has secured contracts for more than three million meters to be supported by its network solutions and services, encompassing both commercial and industrial (C&I) and residential applications. The company has delivered products and services to more than 200 utility customers, including Duke Energy, E.ON US (Louisville Gas & Electric), Hydro One, Hydro Quebec, Jamaica Public Service Company Ltd., Milton Hydro, Northeast Utilities, PowerStream, Public Service Gas & Electric, San Diego Gas & Electric, Toronto Hydro Electric System Ltd., and Union Gas, among others.

Achieving Decentralized Coordination In the Electric Power Industry

For the past century, the dominant business and regulatory paradigms in the electric power industry have been centralized economic and physical control. The ideas presented here and in my forthcoming book, Deregulation, Innovation, and Market Liberalization: Electricity Restructuring in a Constantly Evolving Environment (Routledge, 2008), comprise a different paradigm – decentralized economic and physical coordination – which will be achieved through contracts, transactions, price signals and integrated intertemporal wholesale and retail markets. Digital communication technologies – which are becoming ever more pervasive and affordable – are what make this decentralized coordination possible. In contrast to the “distributed control” concept often invoked by power systems engineers (in which distributed technology is used to enhance centralized control of a system), “decentralized coordination” represents a paradigm in which distributed agents themselves control part of the system, and in aggregate, their actions produce order: emergent order. [1]

Dynamic retail pricing, retail product differentiation and complementary end-use technologies provide the foundation for achieving decentralized coordination in the electric power industry. They bring timely information to consumers and enable them to participate in retail market processes; they also enable retailers to discover and satisfy the heterogeneous preferences of consumers, all of whom have private knowledge that’s unavailable to firms and regulators in the absence of such market processes. Institutions that facilitate this discovery through dynamic pricing and technology are crucial for achieving decentralized coordination. Thus, retail restructuring that allows dynamic pricing and product differentiation, doesn’t stifle the adoption of digital technology and reduces retail entry barriers is necessary if this value-creating decentralized coordination is to happen.

This paper presents a case study – the “GridWise Olympic Peninsula Testbed Demonstration Project” – that illustrates how digital end-use technology and dynamic pricing combine to provide value to residential customers while increasing network reliability and reducing required infrastructure investments through decentralized coordination. The availability (and increasing cost-effectiveness) of digital technologies enabling consumers to monitor and control their energy use and to see transparent price signals has made existing retail rate regulation obsolete. Instead, the policy recommendation that this analysis implies is that regulators should reduce entry barriers in retail markets and allow for dynamic pricing and product differentiation, which are the keys to achieving decentralized coordination.

THE KEYS: DYNAMIC PRICING, DIGITAL TECHNOLOGY

Dynamic pricing provides price signals that reflect variations in the actual costs and benefits of providing electricity at different times of the day. Some of the more sophisticated forms of dynamic pricing harness the dramatic improvements in information technology of the past 20 years to communicate these price signals to consumers. These same technological developments also give consumers a tool for managing their energy use, in either manual or automated form. Currently, with almost all U.S. consumers (even industrial and commercial ones) paying average prices, there’s little incentive for consumers to manage their consumption and shift it away from peak hours. This inelastic demand leads to more capital investment in power plants and transmission and distribution facilities than would occur if consumers could make choices based on their preferences and in the face of dynamic pricing.

Retail price regulation stifles the economic processes that lead to both static and dynamic efficiency. Keeping retail prices fixed truncates the information flow between wholesale and retail markets, and leads to inefficiency, price spikes and price volatility. Fixed retail rates for electric power service mean that the prices individual consumers pay bear little or no relation to the marginal cost of providing power in any given hour. Moreover, because retail prices don’t fluctuate, consumers are given no incentive to change their consumption as the marginal cost of producing electricity changes. This severing of incentives leads to inefficient energy consumption in the short run and also causes inappropriate investment in generation, transmission and distribution capacity in the long run. It has also stifled the implementation of technologies that enable customers to make active consumption decisions, even though communication technologies have become ubiquitous, affordable and user-friendly.

Dynamic pricing can include time-of-use (TOU) rates, which are different prices in blocks over a day (based on expected wholesale prices), or real-time pricing (RTP) in which actual market prices are transmitted to consumers, generally in increments of an hour or less. A TOU rate typically applies predetermined prices to specific time periods by day and by season. RTP differs from TOU mainly because RTP exposes consumers to unexpected variations (positive and negative) due to demand conditions, weather and other factors. In a sense, fixed retail rates and RTP are the end points of a continuum of how much price variability the consumer sees, and different types of TOU systems are points on that continuum. Thus, RTP is but one example of dynamic pricing. Both RTP and TOU provide better price signals to customers than current regulated average prices do. They also enable companies to sell, and customers to purchase, electric power service as a differentiated product.
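
The continuum can be made concrete with a small sketch that bills the same daily usage under a flat rate, a two-block TOU rate and an hourly RTP rate; all prices and loads below are made-up assumptions.

```python
# An illustrative comparison of the pricing continuum described above: the
# same 24-hour usage billed under a flat rate, a two-block TOU rate and an
# hourly RTP rate. Every number here is an assumption for illustration only.
def bill(usage_kwh_by_hour, price_by_hour):
    return sum(u * p for u, p in zip(usage_kwh_by_hour, price_by_hour))

usage = [1.0] * 16 + [2.5] * 6 + [1.0] * 2           # higher evening use (kWh)
flat = [0.12] * 24                                    # $/kWh, average price
tou = [0.08] * 16 + [0.22] * 6 + [0.08] * 2           # on-peak block 4-10 p.m.
rtp = [0.06] * 16 + [0.18, 0.25, 0.40, 0.30, 0.20, 0.15] + [0.07] * 2

for name, prices in [("flat", flat), ("TOU", tou), ("RTP", rtp)]:
    print(name, round(bill(usage, prices), 2))
```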

TECHNOLOGY’S ROLE IN RETAIL CHOICE

Digital technologies are becoming increasingly available to reduce the cost of sending prices to people and their devices. The 2007 Galvin Electricity Initiative report “The Path to Perfect Power: New Technologies Advance Consumer Control” catalogs a variety of end-user technologies (from price-responsive appliances to wireless home automation systems) that can communicate electricity price signals to consumers, retain data on their consumption and be programmed to respond automatically to trigger prices that the consumer chooses based on his or her preferences. [2] Moreover, the two-way communication advanced metering infrastructure (AMI) that enables a retailer and consumer to have that data transparency is also proliferating (albeit slowly) and declining in price.
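
A minimal sketch of the trigger-price behavior these devices enable is shown below; the setpoints, trigger price and setback are illustrative assumptions, not any vendor’s defaults.

```python
# A minimal sketch of a price-responsive thermostat rule: raise the cooling
# setpoint when the delivered price exceeds a consumer-chosen trigger.
# All values are illustrative assumptions.
def setpoint_for_price(price_per_kwh, base_setpoint_f=72.0,
                       trigger_price=0.15, setback_f=3.0):
    """Return the cooling setpoint (deg F) given the current retail price."""
    if price_per_kwh >= trigger_price:
        return base_setpoint_f + setback_f  # shed load when price is high
    return base_setpoint_f

for price in (0.08, 0.15, 0.32):
    print(f"${price:.2f}/kWh -> {setpoint_for_price(price):.0f} F")
```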

Dynamic pricing and the digital technology that enables communication of price information are symbiotic. Dynamic pricing in the absence of enabling technology is meaningless. Likewise, technology without economic signals to respond to is extremely limited in its ability to coordinate buyers and sellers in a way that optimizes network quality and resource use. [3] The combination of dynamic pricing and enabling technology changes the value proposition for the consumer from “I flip the switch, and the light comes on” to a more diverse and consumer-focused set of value-added services.

These diverse value-added services empower consumers and enable them to control their electricity choices with more granularity and precision than the environment in which they think solely of the total amount of electricity they consume. Digital metering and end-user devices also decrease transaction costs between buyers and sellers, lowering barriers to exchange and to the formation of particular markets and products.

Whether they take the form of building control systems that enable the consumer to see the amount of power used by each function performed in a building or appliances that can be programmed to behave differently based on changes in the retail price of electricity, these products and services provide customers with an opportunity to make better choices with more precision than ever before. In aggregate, these choices lead to better capacity utilization and better fuel resource utilization, and provide incentives for innovation to meet customers’ needs and capture their imaginations. In this sense, technological innovation and dynamic retail electricity pricing are at the heart of decentralized coordination in the electric power network.

EVIDENCE

Led by the Pacific Northwest National Laboratory (PNNL), the Olympic Peninsula GridWise Testbed Project served as a demonstration project to test a residential network with highly distributed intelligence and market-based dynamic pricing. [4] Washington’s Olympic Peninsula is an area of great scenic beauty, with population centers concentrated on the northern edge. The peninsula’s electricity distribution network is connected to the rest of the network through a single distribution substation. While the peninsula is experiencing economic growth and associated growth in electricity demand, the natural beauty of the area and other environmental concerns served as an impetus for area residents to explore options beyond simply building generation capacity on the peninsula or adding transmission capacity.

Thus, this project tested how the combination of enabling technologies and market-based dynamic pricing affected utilization of existing capacity, deferral of capital investment and the ability of distributed demand-side and supply-side resources to create system reliability. Two questions were of primary interest:

1) What dynamic pricing contracts do consumers find attractive, and how does enabling technology affect that choice?

2) To what extent will consumers choose to automate energy use decisions?

The project – which ran from April 2006 through March 2007 – included 130 broadband-enabled households with electric heating. Each household received a programmable communicating thermostat (PCT) with a visual user interface that allowed the consumer to program the thermostat for the home – specifically to respond to price signals, if desired. Households also received water heaters equipped with a GridFriendly appliance (GFA) controller chip developed at PNNL that enables the water heater to receive price signals and be programmed to respond automatically to those price signals. Consumers could control the sensitivity of the water heater through the PCT settings.
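
A minimal sketch of the kind of consumer-configured price response described here is shown below: a heating set point is lowered as the price rises past a trigger the consumer chooses. The trigger price, maximum setback and scaling rule are assumptions for illustration; this is not the PNNL GridFriendly or PCT control algorithm.

    # Hypothetical trigger-price logic for a price-responsive thermostat or water heater.
    # The threshold and setback values are assumptions, not the project's actual algorithm.

    def heating_setpoint(base_setpoint_f: float, price: float,
                         trigger_price: float, max_setback_f: float) -> float:
        """Lower the heating set point as price rises above the consumer's trigger."""
        if price <= trigger_price:
            return base_setpoint_f
        # Scale the setback with how far price exceeds the trigger, capped at max_setback_f.
        overshoot = min((price - trigger_price) / trigger_price, 1.0)
        return base_setpoint_f - max_setback_f * overshoot

    # Example: a 70 F set point, a $0.12/kWh trigger price and a 4 F maximum setback.
    for price in (0.08, 0.12, 0.15, 0.30):
        print(price, heating_setpoint(70.0, price, trigger_price=0.12, max_setback_f=4.0))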

These households also participated in a market field experiment involving dynamic pricing. While they continued to purchase energy from their local utility at a fixed, discounted price, they also received a cash account with a predetermined balance, which was replenished quarterly. The energy use decisions they made would determine their overall bill, which was deducted from their cash account, and they were able to keep any difference as profit. The worst a household could do was a zero balance, so they were no worse off than if they had not participated in the experiment. At any time customers could log in to a secure website to see their current balances and determine the effectiveness of their energy use strategies.

On signing up for the project, the households received extensive information and education about the technologies available to them and the kinds of energy use strategies facilitated by these technologies. They were then asked to choose a retail pricing contract from three options: a fixed price contract (with an embedded price risk premium); a TOU contract with a variable critical peak price (CPP) component that could be called in periods of tight capacity; or an RTP contract that would reflect a wholesale market-clearing price in five-minute intervals. The RTP was determined using a uniform price double auction in which buyers (households and commercial customers) submit bids and sellers submit offers simultaneously. This project represented the first instance in which a double auction retail market design was tested in electric power.
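
The clearing logic of a uniform price double auction can be summarized in a few lines: demand bids are sorted from highest to lowest, supply offers from lowest to highest, and trades accumulate until the two curves cross. The Python sketch below is a simplified illustration with assumed bids and offers; it is not the market-clearing software used in the project, and the midpoint pricing rule is just one common convention.

    # Simplified uniform price double auction, for illustration only.
    # Each bid/offer is (price_per_kwh, quantity_kwh); values are assumed.

    def clear_market(buy_bids, sell_offers):
        """Return (clearing_price, cleared_quantity) for a uniform price double auction."""
        buys = sorted(buy_bids, key=lambda b: -b[0])     # willingness to pay, high to low
        sells = sorted(sell_offers, key=lambda s: s[0])  # offer price, low to high
        cleared = 0.0
        price = None
        bi = si = 0
        buy_qty = sell_qty = 0.0
        while bi < len(buys) and si < len(sells) and buys[bi][0] >= sells[si][0]:
            # Trade as much as both sides allow at this step of the curves.
            if buy_qty == 0.0:
                buy_qty = buys[bi][1]
            if sell_qty == 0.0:
                sell_qty = sells[si][1]
            traded = min(buy_qty, sell_qty)
            cleared += traded
            price = (buys[bi][0] + sells[si][0]) / 2.0   # midpoint of the marginal bid and offer
            buy_qty -= traded
            sell_qty -= traded
            if buy_qty == 0.0:
                bi += 1
            if sell_qty == 0.0:
                si += 1
        return price, cleared

    bids = [(0.20, 3), (0.12, 5), (0.07, 4)]      # household and commercial demand bids
    offers = [(0.05, 4), (0.10, 4), (0.18, 6)]    # supply offers, including base supply
    print(clear_market(bids, offers))              # -> (0.11, 8)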

The households ranked the contracts and were then divided fairly evenly among the three types, along with a control group that received the enabling technologies and had their energy use monitored but did not participate in the dynamic pricing market experiment. All households received either their first or second choice; interestingly, more than two-thirds of the households ranked RTP as their first choice. This result counters the received wisdom that residential customers want only reliable service at low, stable prices.

According to the 2007 report on the project by D.J. Hammerstrom (and others), on average participants saved 10 percent on their electricity bills. [5] That report also includes the following findings about the project:

Result 1. For the RTP group, peak consumption decreased by 15 to 17 percent relative to what the peak would have been in the absence of the dynamic pricing – even though their overall energy consumption increased by approximately 4 percent. This flattening of the load duration curve indicates shifting some peak demand to nonpeak hours. Such shifting increases the system’s load factor, improving capacity utilization and reducing the need to invest in additional capacity, for a given level of demand. A 15 to 17 percent reduction is substantial and is similar in magnitude to the reductions seen in other dynamic pricing pilots.

After controlling for price response, weather effects and weekend days, the RTP group’s overall energy consumption was 4 percent higher than that of the fixed price group. This result, in combination with the load duration effect noted above, indicates that the overall effect of RTP dynamic pricing is to smooth consumption over time, not decrease it.

Result 2. The TOU group achieved both a large price elasticity of demand (-0.17), based on hourly data, and an overall energy reduction of approximately 20 percent relative to the fixed price group.

After controlling for price response, weather effects and weekend days, the TOU group’s overall energy consumption was 20 percent lower than that of the fixed price group. This result indicates that the TOU (with occasional critical peaks) pricing induced overall conservation – a result consistent with the results of the California SPP project. The estimated price elasticity of demand in the TOU group was -0.17, which is high relative to that observed in other projects. This elasticity suggests that the pricing coupled with the enabling end-use technology amplifies the price responsiveness of even small residential consumers.
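
To put an elasticity of -0.17 in concrete terms, it implies that a 10 percent price increase reduces consumption by roughly 1.7 percent. The short sketch below applies that relationship to two assumed price changes; the percentages are illustrative, not pilot data.

    # Worked example of what a price elasticity of demand of -0.17 implies.
    # The percentage price changes below are assumptions for illustration.

    ELASTICITY = -0.17

    def consumption_change_pct(price_change_pct: float) -> float:
        """Approximate percent change in consumption for a given percent change in price."""
        return ELASTICITY * price_change_pct

    print(consumption_change_pct(10.0))   # -> -1.7 (a 10 percent price rise)
    print(consumption_change_pct(50.0))   # -> -8.5 (a 50 percent critical peak rise)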

Despite these results, dynamic pricing and enabling technologies are proliferating slowly in the electricity industry. Proliferation requires a combination of formal and informal institutional change to overcome a variety of barriers. And while formal institutional change (primarily in the form of federal legislation) is reducing some of these barriers, it remains an incremental process. The traditional rate structure, fixed by state regulation and slow to change, presents a substantial barrier. Predetermined load profiles inhibit market-based pricing by ignoring individual customer variation and the information that customers can communicate through choices in response to price signals. Furthermore, the persistence of standard offer service at a discounted rate (that is, a rate that does not reflect the financial cost of insurance against price risk) stifles any incentive customers might have to pursue other pricing options.

The most significant – yet also most intangible and difficult-to-overcome – obstacle to dynamic pricing and enabling technologies is inertia. All of the primary stakeholders in the industry – utilities, regulators and customers – harbor status quo bias. Incumbent utilities face incentives to maintain the regulated status quo as much as possible (given the economic, technological and demographic changes surrounding them) – and thus far, they’ve been successful in using the political process to achieve this objective.

Customer inertia also runs deep because consumers have not had to think about their consumption of electricity or the price they pay for it – a bias consumer advocates generally reinforce by arguing that low, stable prices for highly reliable power are an entitlement. Regulators and customers value the stability and predictability that have arisen from this vertically integrated, historically supply-oriented and reliability-focused environment; however, what is unseen and unaccounted for is the opportunity cost of such predictability – the foregone value creation in innovative services, empowerment of customers to manage their own energy use and use of double-sided markets to enhance market efficiency and network reliability. Compare this unseen potential with the value creation in telecommunications, where even young adults can understand and adapt to cell phone pricing plans and benefit from the stream of innovations in the industry.

CONCLUSION

The potential for a highly distributed, decentralized network of devices automated to respond to price signals creates new policy and research questions. Do individuals automate sending prices to devices? If so, do they adjust settings, and how? Does the combination of price effects and innovation increase total surplus, including consumer surplus? In aggregate, do these distributed actions create emergent order in the form of system reliability?

Answering these questions requires thinking about the diffuse and private nature of the knowledge embedded in the network, and the extent to which such a network becomes a complex adaptive system. Technology helps determine whether decentralized coordination and emergent order are possible; the dramatic transformation of digital technology in the past few decades has decreased transaction costs and increased the extent of feasible decentralized coordination in this industry. Institutions – which structure and shape the contexts in which such processes occur – provide a means for creating this coordination. And finally, regulatory institutions affect whether or not this coordination can occur.

For this reason, effective regulation should focus not on allocation but rather on decentralized coordination and how to bring it about. This in turn means a focus on market processes, which are adaptive institutions that evolve along with technological change. Regulatory institutions should also be adaptive, and policymakers should view regulatory policy as work in progress so that the institutions can adapt to unknown and changing conditions and enable decentralized coordination.

ENDNOTES

1. Order can take many forms in a complex system like electricity – for example, keeping the lights on (short-term reliability), achieving economic efficiency, optimizing transmission congestion, longer-term resource adequacy and so on.

2. Roger W. Gale, Jean-Louis Poirier, Lynne Kiesling and David Bodde, “The Path to Perfect Power: New Technologies Advance Consumer Control,” Galvin Electricity Initiative report (2007). www.galvinpower.org/resources/galvin.php?id=88

3. The exception to this claim is the TOU contract, where the rate structure is known in advance. However, even on such a simple dynamic pricing contract, devices that allow customers to see their consumption and expenditure in real time instead of waiting for their bill can change behavior.

4. D.J. Hammerstrom et al., “Pacific Northwest GridWise Testbed Demonstration Projects, Volume I: The Olympic Peninsula Project” (2007). http://gridwise.pnl.gov/docs/op_project_final_report_pnnl17167.pdf

5. Ibid.

Leveraging the Data Deluge: Integrated Intelligent Utility Network

If you define a machine as a series of interconnected parts serving a unified purpose, the electric power grid is arguably the world’s largest machine. The next-generation version of the electric power grid – called the intelligent utility network (IUN), the smart grid or the intelligent grid, depending on your nationality or information source – provides utilities with enhanced transparency into grid operations.

Considering the geographic and logical scale of the electric grid from any one utility’s point of view, a tremendous amount of data will be generated by the additional “sensing” of the workings of the grid provided by the IUN. This output is often described as a “data flood,” and the implication that businesses could drown in it is apropos. For that reason, utility business managers and engineers need analytical tools to keep their heads above water and obtain insight from all this data. Paraphrasing the psychologist Abraham Maslow, the “hierarchy of needs” for applying analytics to make sense of this data flood could be represented as follows (Figure 1).

  • Insight represents decisions made based on analytics calculated using new sensor data integrated with existing sensor or quasi-static data.
  • Knowledge means understanding what the data signifies in the context of other information.
  • Information means understanding precisely what the data measures.
  • Data represents the essential reading of a parameter – often a physical parameter.

In order to reap the benefits of accessing the higher levels of this hierarchy, utilities must apply the correct analytics to the relevant data. One essential element is integrating the new IUN data with other data over the various time dimensions. Indeed, it is analytics that allow utilities to truly benefit from the enhanced capabilities of the IUN compared to the traditional electric power grid. Analytics can consist solely of calculations (such as measuring reactive power), or they can be rule-based (such as rating a transformer as “stressed” if it operates at more than 120 percent of its nameplate rating over a two-hour period).
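
A rule of the kind just described is easy to express in code. The Python sketch below flags a transformer as “stressed” when its load stays above 120 percent of nameplate rating for a continuous two-hour window; the nameplate rating, sample interval and readings are assumed values, not any vendor’s analytic.

    # Illustrative rule-based analytic: flag a transformer as "stressed" when load
    # exceeds 120 percent of nameplate rating continuously for two hours.
    # Readings are (minutes_since_start, load_kva) pairs; all values are assumed.

    NAMEPLATE_KVA = 500.0
    THRESHOLD_KVA = 1.20 * NAMEPLATE_KVA
    WINDOW_MINUTES = 120

    def is_stressed(readings):
        """Return True if every reading in some two-hour span exceeds the threshold."""
        start = None
        for minute, load_kva in readings:
            if load_kva > THRESHOLD_KVA:
                if start is None:
                    start = minute
                if minute - start >= WINDOW_MINUTES:
                    return True
            else:
                start = None
        return False

    # 5-minute SCADA samples: 2.5 hours at 130 percent of nameplate.
    samples = [(t, 650.0) for t in range(0, 150, 5)]
    print(is_stressed(samples))   # -> True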

The data to be analyzed comes from multiple sources. Utilities have for years had supervisory control and data acquisition (SCADA) systems in place that employ technologies to transmit voltage, current, watts, volt-amperes reactive (VARs) and phase angle via leased telephone lines at 9,600 baud, using the distributed network protocol (DNP3). Utilities still need to integrate this basic information from these systems.

In addition, modern electrical power equipment often comes with embedded microprocessors capable of generating useful non-operational information. This can include switch closing time, transformer oil chemistry and arc durations. These pieces of equipment – generically called intelligent electrical devices (IEDs) – often have local high-speed sequences of event recorders that can be programmed to deliver even more data for a report for post-event analysis.

An increasing number of utilities are beginning to see the business cases for implementing an advanced metering infrastructure (AMI). A large-scale deployment of such meters would also function as a fine-grained edge sensor system for the distribution network, providing not only consumption data but also voltage, power quality and load phase angle information. In addition, an AMI can be a strategic platform for initiating a program of demand-response load control. Indeed, some innovative utilities are considering two-way AMI meters that include a wireless connection such as Zigbee to the consumer’s home automation network (HAN), providing even finer detail on load usage and potential controllability.

Companies must find ways to analyze all this data, both from explicit sources such as IEDs and implicit sources such as AMI or geographic information systems (GIS). A crucial aspect of IUN analysis is the ability to integrate conventional database data with time-synchronized data, since an isolated analytic can be less useful than no analytic at all.

CATEGORIES AND RELATIONSHIPS

There are many different categories of analytics that address the specific needs of the electric power utility in dealing with the data deluge presented by the IUN. Some depend on the state regulatory environment, which not only imposes operational constraints on utilities but also determines the scope and effect of what analytics information exchange is required. For example, a generation-to-distribution utility – what some fossil plant owners call “fire to wire” – may have system-wide analytics that link load dispatch to generation economics, transmission line realities and distribution customer load profiles. Other utilities operate power lines only, and may not have their own generation capabilities or interact with consumers at all. Utilities like these may choose to focus initially on distribution analytics such as outage prediction and fault location.

Even the term analytics can have different meanings for different people. To the power system engineer it involves phase angles, voltage support from capacitor banks and equations that take the form “a + j*b.” To the line-of-business manager, integrated analytics may include customer revenue assurance, lifetime stress analysis of expensive transformers and dashboard analytics driving business process models. Customer service executives could use analytics to derive emergency load control measures based on a definition of fairness that could become quite complex. But perhaps the best general definition of analytics comes from the Six Sigma process mantra of “define, measure, analyze, improve, control.” In the computer-driven IUN, this would involve:

  • Define. This involves sensor selection and location.
  • Measure. SCADA systems enable this process.
  • Analyze. This can be achieved using IUN Analytics.
  • Improve. This involves grid performance optimization, as well as business process enhancements.
  • Control. This is achieved by sending commands back to grid devices via SCADA, and by business process monitoring.

The term optimization can also be interpreted in several ways. Utilities can attempt to optimize key performance indicators (KPIs) such as the system average interruption duration index (SAIDI, which is somewhat consumer-oriented); grid efficiency, in terms of megawatts lost to component heating; business processes, such as minimizing outage time to repair; or meeting energy demand at minimum incremental fuel cost.
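
As a point of reference, SAIDI is computed as the total customer-minutes (or customer-hours) of interruption divided by the total number of customers served. The sketch below shows the calculation with made-up outage records.

    # SAIDI = total customer-minutes of interruption / total customers served.
    # Outage records are (customers_affected, minutes_out); all figures are made up.

    outages = [(1200, 90), (300, 45), (4500, 15)]
    TOTAL_CUSTOMERS = 50_000

    customer_minutes = sum(customers * minutes for customers, minutes in outages)
    saidi_minutes = customer_minutes / TOTAL_CUSTOMERS
    print(f"SAIDI: {saidi_minutes:.2f} minutes per customer for the period")   # -> 3.78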

Although optimization issues often cross departmental boundaries, utilities may make compromises for the sake of achieving an overall strategic goal that can seem elusive or even run counter to individual financial incentives. An important part of higher-level optimization – in a business sense rather than a mathematical one – is the need for a utility to document its enterprise functions using true business process modeling tools. These are essential to finding better application integration strategies. That way, the business can monitor the advisories from analytics in the tool itself, and more easily identify business process changes suggested by patterns of online analytics.

Another aspect of IUN analytics involves – using a favorite television news phrase – “connecting the dots.” This means ensuring that a utility actually realizes the impact of a series of events on an end state, even though the individual events may appear unrelated.

For example, take complex event processing (CEP). A “simple” event might involve a credit card company’s software verifying that your credit card balance is under the limit before sending an authorization to the merchant. A “complex” event would take place if a transaction request for a given credit card account was made at a store in Boston, and another request was made an hour later in Chicago. After taking into account certain realities of time and distance, the software would take an action other than approval – such as instructing the merchant to verify the cardholder’s identity.

Back in the utilities world, consideration of weather forecasts in demand-response action planning, or distribution circuit redundancy in the face of certain existing faults, can be handled by such software. The key in developing these analytics is not so much about establishing valid mathematical relationships as it is about giving a businessperson the capability to create and define rules. These rules must be formulated within an integrated set of systems that support cross-functional information. Ultimately, it is the businessperson who relates the analytics back to business processes.
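
In code, such a rule is simply a condition evaluated over several otherwise unrelated inputs. The Python fragment below gates a demand-response dispatch on both a weather forecast and the redundancy state of a distribution circuit; the event fields and thresholds are hypothetical and are not taken from any particular CEP product.

    # Hypothetical complex-event rule: only dispatch demand response when a heat
    # event is forecast AND the affected feeder has lost its backup tie.
    # Event fields and thresholds are illustrative assumptions.

    def should_dispatch_demand_response(forecast: dict, feeder: dict) -> bool:
        heat_event = forecast["peak_temp_f"] >= 95 and forecast["hours_ahead"] <= 24
        redundancy_lost = (not feeder["tie_switch_available"]) and feeder["load_pct"] >= 90
        return heat_event and redundancy_lost

    forecast = {"peak_temp_f": 97, "hours_ahead": 18}
    feeder = {"tie_switch_available": False, "load_pct": 93}
    print(should_dispatch_demand_response(forecast, feeder))   # -> True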

AVAILABLE TOOLS

Time can be a critical variable in successfully using analytics. In some cases, utilities require analytics to be responsive to the electric power grid’s need to input, calculate and output in an actionable time frame.

Utilities often have analytics built into functions in their distribution management or energy management systems, as well as individual analytic applications, both commercial and home-grown. And some utilities are still making certain decisions by importing data into a spreadsheet and using a self-developed algorithm. No matter what the source, the architecture of the analytics system should provide a non-real-time “bus,” often a service-oriented architecture (SOA) or Web services interface, but also a more time-dependent data bus that supports common industry tools used for desktop analytics within the power industry.

It’s important that the utility publish internal standards for interconnecting analytics to these buses, so that all authorized stakeholders can access them. Utilities should also set enterprise policy for special connectors, manual entry and duplication of data, otherwise known as SOA governance.

The easier it is for utilities to use the IUN data, the less likely it is that their engineering, operations and maintenance staffs will be overwhelmed by the task of actually acquiring the data. Although the term “plug and play” has taken on certain negative connotations – largely due to the fact that few plug-and-play devices actually do that – the principle of easily adding a tool is still both valid and valuable. New instances of IUN can even include Web 2.0 characteristics for the purpose of mash-ups – easily configurable software modules that link, without pain, via Web services.

THE GOAL OF IMPLEMENTING ANALYTICS

Utilities benefit from applying analytics by making the best use of integrated utility enterprise information and data models, and by unlocking employee ideas or hypotheses about ways to improve operations. Often, analytics are also useful in helping employees identify suspicious relationships in the data. The widely lamented “aging workforce” issue typically involves the loss of senior staff who can visualize relationships that aren’t formally captured, and who were able to make connections that others didn’t see. Higher-level analytics can partly offset the impact of the aging workforce brain drain.

Another type of analytics is commonly called business intelligence (BI). Although a number of best-selling general-purpose BI tools are commercially available, utilities need to ensure that the tools have access to the correct, unique, authoritative data. Upon first installing BI software, there’s sometimes a tendency among new users to quickly assemble a highly visual dashboard – without regard to the integrity of the data they’re importing into the tool.

Utilities should also create enterprise data models and data dictionaries to ensure the accuracy of the information being disseminated throughout the organization. After all, utilities frequently use analytics to create reports that summarize data at a high level. Yet some fault detection schemes – such as identifying problems in buried cables – may need original, detailed source data. For that reason utilities must have an enterprise data governance scheme in place.

In newer systems, data dictionaries and models can be provided by a Web service. But even if the dictionary consists of an intermediate lookup table in a relational database, the principles still hold: Every process and calculated variable must have a non-ambiguous name, a cross-reference to other major systems (such as a distribution management system [DMS] or geographic information system [GIS]), a pointer to the data source and the name of the person who owns the data. It is critical for utilities to assign responsibility for data accuracy, validation, source and caveats at the beginning of the analytics engineering process. Finding data faults after they contribute to less-than-correct results from the analytics is of little use. Utilities may find data scrubbing and cross-validation tools from the IT industry to be useful where massive amounts of data are involved.
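
As a sketch of the minimal record implied here, each data dictionary entry could carry a non-ambiguous name, cross-references to major systems such as the DMS or GIS, a pointer to the authoritative source and an accountable owner. The structure and example values below are illustrative assumptions, not any utility’s or vendor’s schema.

    # Illustrative data dictionary entry capturing the minimum attributes described above.
    # Field names and example values are assumptions.

    from dataclasses import dataclass, field

    @dataclass
    class DataDictionaryEntry:
        name: str                       # non-ambiguous process or calculated-variable name
        description: str
        units: str
        source_system: str              # pointer to the authoritative data source
        cross_references: dict = field(default_factory=dict)   # e.g. {"DMS": "...", "GIS": "..."}
        owner: str = ""                 # person accountable for accuracy and validation

    entry = DataDictionaryEntry(
        name="feeder_12_loading_pct",
        description="Loading of feeder 12 as a percent of normal rating, 5-minute average",
        units="percent",
        source_system="SCADA historian",
        cross_references={"DMS": "FDR-012", "GIS": "asset-8841"},
        owner="distribution.planning@example.com",
    )
    print(entry.name, entry.owner)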

Utilities have traditionally used simulation primarily as a planning tool. However, with the continued application of Moore’s law, the ability to feed a power system simulation with real-time data and solve a state estimation in real time can result in an affordable crystal ball for predicting problems, finding anomalies or performing emergency problem solving.

THE IMPORTANCE OF STANDARDS

The emergence of industry-wide standards is making analytics easier to deploy across utility companies. Standards also help ease the path to integration. After all, most electrons look the same around the world, and the standards arising from the efforts of Kirchhoff, Tesla and Maxwell have been broadly adopted globally. (Contrary views from the quantum mechanics community will not be discussed here!) Indeed, having a documented, self-describing data model is important for any utility hoping to make enterprise-wide use of data for analytics; using an industry-standard data model makes the analytics more easily shareable. In an age of greater grid interconnection, more mergers and acquisitions, and staff shortages, utilities’ ability to reuse and share analytics and create tools on top of standards-based data models has become increasingly important.

Standards are also important when interfacing to existing utility systems. Although the IUN may be new, data on existing grid apparatus and layout may be decades old. By combining the newly added grid observations with the existing static system information to form a complete integration scenario, utilities can leverage analytics much more effectively.

When deploying an IUN, there can be a tendency to use just the newer, sensor-derived data to make decisions, because one knows where it is and how to access it. But using standardized data models makes incorporating existing data less of an issue. There is nothing wrong with creating new data models for older data.

CONCLUSION

To understand the importance of analytics in relation to the IUN, imagine an ice-cream model (pick your favorite flavor). At the lowest level we have data: the ice cream is 30 degrees. At the next level we have information: you know that it is 30 degrees on the surface of the ice cream, and that it will start melting at 32 degrees. At the next level we have knowledge: you’re measuring the temperature of the middle scoop of a three-scoop cone, and therefore when it melts, the entire structure will collapse. At the insight level we bring in other knowledge – such as that the ambient air temperature is 80 degrees, and that the surface temperature of the ice cream has been rising 0.5 degrees per minute since you purchased it. Then the gastronomic analytics activate and take preemptive action, causing you to eat the whole cone in one bite, because the temporary frozen-teeth phenomenon is less of a business risk than having the scoops melt and fault to ground.

The GridWise Olympic Peninsula Project

The Olympic Peninsula Project consisted of a field demonstration and test of advanced price signal-based control of distributed energy resources (DERs). Sponsored by the U.S. Department of Energy (DOE) and led by the Pacific Northwest National Laboratory, the project was part of the Pacific Northwest GridWise Testbed Demonstration.

Other participating organizations included the Bonneville Power Administration, Public Utility District (PUD) #1 of Clallam County, the City of Port Angeles, Portland General Electric, IBM’s T.J. Watson Research Center, Whirlpool and Invensys Controls. The main objective of the project was to convert normally passive loads and idle distributed generation into actively participating resources optimally coordinated in near real time to reduce stress on the local distribution system.

Planning began in late 2004, and the bulk of the development work took place in 2005. By late 2005, equipment installations had begun, and by spring 2006, the experiment was fully operational, remaining so for one full year.

The motivating theme of the project was based on the GridWise concept that inserting intelligence into electric grid components at every point in the supply chain – from generation through end-use – will significantly improve both the electrical and economic efficiency of the power system. In this case, information technology and communications were used to create a real-time energy market system that could control demand response automation and distributed generation dispatch. Optimal use of the DER assets was achieved through the market, which was designed to manage the flow of power through a constrained distribution feeder circuit.

The project also illustrated the value of interoperability in several ways, as defined by the DOE’s GridWise Architecture Council (GWAC). First, a highly heterogeneous set of energy assets, associated automation controls and business processes was composed into a single solution integrating a purely economic or business function (the market-clearing system) with purely physical or operational functions (thermostatic control of space heating and water heating). This demonstrated interoperability at the technical and informational levels of the GWAC Interoperability Framework (www.gridwiseac.org/about/publications.aspx), providing an ideal example of a cyber-physical-business system. In addition, it represents an important class of solutions that will emerge as part of the transition to smart grids.

Second, the objectives of the various asset owners participating in the market were continuously balanced to maintain the optimal solution at any point in time. This included the residential demand response customers; the commercial and municipal entities with both demand response and distributed generation; and the utilities, which demonstrated interoperability at the organizational level of the framework.

PROJECT RESOURCES

The following energy assets were configured to respond to market price signals:

  • Residential demand response for electric space and water heating in 112 single-family homes using gateways connected by DSL or cable modem to provide two-way communication. The residential demand response system allowed the current market price of electricity to be presented to customers. Consumers could also configure their demand response automation preferences. The residential consumers were evenly divided among three contract types (fixed, time of use and real time) and a fourth control group. All electricity consumption was metered, but only the loads in price-responsive homes were controlled by the project (approximately 75 kW).
  • Two distributed generation units (175 kW and 600 kW) at a commercial site served the facility’s load when the feeder supply was not sufficient. These units were not connected in parallel to the grid, so they were bid into the market as a demand response asset equal to the total load of the facility (approximately 170 kW). When the bid was satisfied, the facility disconnected from the grid and shifted its load to the distributed generation units.
  • One distributed microturbine (30 kW) that was connected in parallel to the grid. This unit was bid into the market as a generation asset based on the actual fixed and variable expenses of running the unit.
  • Five 40-horsepower (HP) water pumps distributed between two municipal water-pumping stations (approximately 150 kW of total nameplate load). The demand response load from these pumps was incrementally bid into the market based on the water level in the pumped storage reservoir, effectively converting the top few feet of the reservoir capacity into a demand response asset on the electrical grid.

Monitoring was performed for all of these resources, and in cases of price-responsive contracts, automated control of demand response was also provided. All consumers who employed automated control were able to temporarily disable or override project control of their loads or generation units. In the residential real-time price demand response homes, consumers were given a simple configuration choice for their space heating and water heating that involved selecting an ideal set point and a degree of trade-off between comfort and price responsiveness.

For real-time price contracts, the space heater demand response involved automated bidding into the market by the space heating system. Since the programmable thermostats deployed in the project didn’t support real-time market bidding, IBM Research implemented virtual thermostats in software using an event-based distributed programming prototype called Internet-Scale Control Systems (iCS). The iCS prototype is designed to support distributed control applications that span virtually any underlying device or business process through the definition of software sensor, actuator and control objects connected by an asynchronous event programming model that can be deployed on a wide range of underlying communication and runtime environments. For this project, virtual thermostats were defined that conceptually wrapped the real thermostats and incorporated all of their functionality while at the same time providing the additional functionality needed to implement the real-time bidding. These virtual thermostats received the actual temperature of the house as well as information about the real-time market average price and price distribution and the consumer’s preferences for set point and comfort/economy trade-off setting. This allowed the virtual thermostats to calculate the appropriate bid every five minutes based on the changing temperature and market price of energy.
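
A highly simplified version of that bid calculation is sketched below: the bid rises above the recent average price as the indoor temperature falls further below the set point, scaled by the consumer’s comfort/economy preference. The formula and parameters are assumptions for illustration and are not the iCS implementation used in the project.

    # Simplified sketch of a virtual thermostat computing a heating bid each
    # five-minute market cycle. The bid formula is illustrative only.

    def heating_bid(indoor_temp_f: float, setpoint_f: float,
                    avg_price: float, price_stddev: float,
                    comfort_weight: float) -> float:
        """
        Return the price this home is willing to pay for heating this cycle.
        comfort_weight in [0, 1]: 0 = maximum economy, 1 = maximum comfort.
        """
        temp_deficit = setpoint_f - indoor_temp_f      # how far the home is below the set point
        if temp_deficit <= 0:
            return 0.0                                 # warm enough: do not bid
        # Bid around the recent average price, shifted by need and comfort preference.
        return avg_price + price_stddev * comfort_weight * temp_deficit

    # Example: 2 F below set point, average price $0.10/kWh, stddev $0.02/kWh.
    print(heating_bid(68.0, 70.0, avg_price=0.10, price_stddev=0.02, comfort_weight=0.5))   # -> 0.12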

The real-time market in the project was implemented as a shadow market – that is, rather than change the actual utility billing structure, the project implemented a parallel billing system and a real-time market. Consumers still received their normal utility bill each month, but in addition they received an online bill from the shadow market. This additional bill was paid from a debit account that used funds seeded by the project based on historical energy consumption information for the consumer.

The objective was to provide an economic incentive to consumers to be more price responsive. This was accomplished by allowing the consumers to keep the remaining balance in the debit account at the end of each quarter. Those consumers who were most responsive were estimated to receive about $150 at the end of the quarter.

The market in the project cleared every five minutes, having received demand response bids, distributed generation bids and a base supply bid based on the supply capacity and wholesale price of energy in the Mid-Columbia system operated by Bonneville Power Administration. (This was accomplished through a Dow Jones feed of the Mid-Columbia price and other information sources for capacity.) The market operation required project assets to submit bids every five minutes into the market, and then respond to the cleared price at the end of the five-minute market cycle. In the case of residential space heating in real-time price contract homes, the virtual thermostats adjusted the temperature set point every five minutes; however, in most cases the adjustment was negligible (for example, one-tenth of a degree) if the price was stable.

KEY FINDINGS

Distribution constraint management. As one of the primary objectives of the experiment, distribution constraint management was successfully accomplished. The distribution feeder-imported capacity was managed through demand response automation to a cap of 750 kW for all but one five-minute market cycle during the project year. In addition, distributed generation was dispatched as needed during the project, up to a peak of about 350 kW.

During one period of about 40 hours that took place from Oct. 30, 2006, to Nov. 1, 2006, the system successfully constrained the feeder import capacity at its limit and dispatched distributed generation several times, as shown in Figure 1. In this figure, actual demand under real-time price control is shown in red, while the blue line depicts what demand would have been without real-time price control. It should be noted that the red demand line steps up and down above the feeder capacity line several times during the event – this is the result of distributed generation units being dispatched and removed as their bid prices are met or not.

Market-based control demonstrated. The project controlled both heating and cooling loads, which showed a surprisingly significant shift in energy consumption. Space conditioning loads in real-time price contract homes demonstrated a significant shift to early morning hours – a shift that occurred during both constrained and unconstrained feeder conditions but was more pronounced during constrained periods. This is similar to what one would expect in preheating or precooling systems, but neither the real nor the virtual thermostats in the project had any explicit prediction capability. The analysis showed that the diurnal shape of the price curve itself caused the effect.

Peak load reduced. The project’s real-time price control system both deferred and shifted peak load very effectively. Unlike the time-of-use system, the real-time price control system operated at a fine level of precision, responding only when constraints were present and resulting in a precise and proportionally appropriate level of response. The time-of-use system, on the other hand, was much coarser in its response and responded regardless of conditions on the grid, since it was only responding to preconfigured time schedules or manually initiated critical peak price signals.

Internet-based control demonstrated. Bids and control of the distributed energy resources in the project were implemented over Internet connections. As an example, the residential thermostats modified their operation through a combination of local and central control communicated as asynchronous events over the Internet. Even in situations of intermittent communication failure, resources typically performed well in default mode until communications could be re-established. This example of the resilience of a well-designed, loosely coupled distributed control application schema is an important aspect of what the project demonstrated.

Distributed generation served as a valuable resource. The project was highly effective in using the distributed generation units, dispatching them many times over the duration of the experiment. Since the diesel generators were restricted by environmental licensing regulations to operate no more than 100 hours per year, the bid calculation factored in a sliding scale price premium such that bids would become higher as the cumulative runtime for the generators increased toward 100 hours.
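
One way to express that sliding-scale premium is sketched below: the offer price grows steeply as cumulative runtime approaches the 100-hour annual limit, so the generators clear the market only when the system is willing to pay a high price. The base cost figure and the shape of the curve are assumptions, not the project’s actual bid formula.

    # Illustrative sliding-scale bid for a runtime-limited diesel generator.
    # The base cost and premium curve are assumptions, not project values.

    RUNTIME_LIMIT_HOURS = 100.0
    BASE_COST_PER_KWH = 0.18        # assumed fixed plus variable cost of running the unit

    def generator_bid(cumulative_runtime_hours: float) -> float:
        """Offer price rises steeply as the unit approaches its annual runtime limit."""
        remaining_fraction = max(1.0 - cumulative_runtime_hours / RUNTIME_LIMIT_HOURS, 0.01)
        return BASE_COST_PER_KWH / remaining_fraction

    for hours in (0, 50, 90, 99):
        print(hours, round(generator_bid(hours), 2))   # 0.18, 0.36, 1.80, 18.00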

CONCLUSION

The Olympic Peninsula Project was unique in many ways. It clearly demonstrated the value of the GridWise concepts of leveraging information technology and incorporating market constructs to manage distributed energy resources. Local marginal price signals, as implemented through the market-clearing process, together with the overall event-based software integration framework, successfully managed the bidding and dispatch of loads and balanced the issues of wholesale costs, distribution congestion and customer needs in a very natural fashion.

The final report (as well as background material) on the project is available at www.gridwise.pnl.gov. The report expands on the remarks in this article and provides detailed coverage of a number of important assertions supported by the project, including:

  • Market-based control was shown to be a viable and effective tool for managing price-based responses from single-family premises.
  • Peak load reduction was successfully accomplished.
  • Automation was extremely important in obtaining consistent responses from both supply and demand resources.
  • The project demonstrated that demand response programs could be designed by establishing debit account incentives without changing the actual energy prices offered by energy providers.

Although technological challenges were identified and noted, the project found no fundamental obstacles to implementing similar systems at a much larger scale. Thus, it’s hoped that an opportunity to do so will present itself at some point in the near future.

Pepco Holdings, Inc.

The United States and the world are facing two preeminent energy challenges: the rising cost of energy and the impact of increasing energy use on the environment. As a regulated public utility and one of the largest energy delivery companies in the Mid-Atlantic region, Pepco Holdings Inc. (PHI) recognized that it was uniquely positioned to play a leadership role in helping meet both of these challenges.

PHI calls the plan it developed to meet these challenges the Blueprint for the Future (Blueprint). The plan builds on work already begun through PHI’s Utility of the Future initiative, as well as other programs. The Blueprint focuses on implementing advanced technologies and energy efficiency programs to improve service to its customers and enable them to manage their energy use and costs. By providing tools for nearly 2 million customers across three states and the District of Columbia to better control their electricity use, PHI believes it can make a major contribution to meeting the nation’s energy and environmental challenges, and at the same time help customers keep their electric and natural gas bills as low as possible.

The PHI Blueprint is designed to give customers what they want: reasonable and stable energy costs, responsive customer service, power reliability and environmental stewardship.

PHI is deploying a number of innovative technologies. Some, such as its automated distribution system, help to improve reliability and workforce productivity. Other systems, including an advanced metering infrastructure (AMI), will enable customers to monitor and control their electricity use, reduce their energy costs and gain access to innovative rate options.

PHI’s Blueprint is both ambitious and complex. Over the next five years PHI will be deploying new technologies, modifying and/or creating numerous information systems, redefining customer and operating work processes, restructuring organizations, and managing relationships with customers and regulators in four jurisdictions. PHI intends to do all of this while continuing to provide safe and reliable energy service to its customers.

To assist in developing and executing this plan, PHI reached out to peer utilities and vendors. One significant “partner” group is the Global Intelligent Utility Network Coalition (GIUNC), established by IBM, which currently includes CenterPoint Energy (Texas), Country Energy (New South Wales, Australia) and PHI.

Leveraging these resources and others, PHI managers spent much of 2007 compiling detailed plans for realizing the Blueprint. Several aspects of these planning efforts are described below.

VISION AND DESIGN

In 2007, multiple initiatives were launched to flesh out the many aspects of the Blueprint. As Figure 1 illustrates, all of the initiatives were related and designed to generate a deployment plan based on a comprehensive review of the business and technical aspects of the project.

At this early stage, PHI does not yet have all the answers. Indeed, prematurely committing to specific technologies or designs for work that will not be completed for five years can raise the risk of obsolescence and lost investment. The deployment plan and system map, discussed in more detail below, are intended to serve as a guide. They will be updated and modified as decision points are reached and new information becomes available.

BUSINESS CASE VALIDATION

One of the first tasks was to review and define in detail the business case analyses for the project components. Both benefit assumptions and implementation costs were tested. Reference information (benchmarks) for this review came from a variety of sources: IBM experience in projects of similar scope and type; PHI materials and analysis; experiences reported by other GIUNC members; and other utilities and other publicly available sources. This information was compiled, and a present value analysis was conducted on discounted cash flow and rate of return, as shown in Figure 2.

In addition to an “operational benefits” analysis, PHI and the Brattle Group developed value assessments associated with demand response offerings such as critical peak pricing. With demand response, peak consumption can be reduced and capacity cost avoided. This means lower total energy prices for customers and fewer new capacity additions in the market. As Figure 2 shows, even in the worst-case scenario for demand response savings, operational and customer benefits will offset the cost of PHI’s AMI investment.
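
The discounted cash flow arithmetic behind such a business case is simple to sketch. The Python fragment below computes a net present value from a stream of assumed annual net cash flows and an assumed discount rate; none of the figures are PHI’s.

    # Minimal discounted cash flow sketch for an AMI business case.
    # The yearly net cash flows (benefits minus costs) and the discount rate
    # are assumed for illustration and are not PHI's actual figures.

    DISCOUNT_RATE = 0.08
    net_cash_flows = [-120.0, -60.0, 25.0, 45.0, 55.0, 60.0, 60.0]   # $ millions, years 0 through 6

    npv = sum(cf / (1 + DISCOUNT_RATE) ** year for year, cf in enumerate(net_cash_flows))
    print(f"Net present value: ${npv:.1f}M")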

The information from these various cases has since been integrated into a single program management tool. Additional capabilities for optimizing results based on value, cost and schedule were developed. Finally, dynamic relationships between variables were modeled and added to the tool, recognizing that assumptions don’t always remain constant as plans are changed. One example of this would be the likely increase in call center cost per meter when deployment accelerates and customer inquiries increase.

HIGH-LEVEL COMMUNICATIONS ARCHITECTURE DESIGN

To define and develop the communications architecture, PHI deployed a structured approach built around IBM’s proprietary optimal comparative communications architecture methodology (OCCAM). This methodology established the communications requirements for AMI, data architecture and other technologies considered in the Blueprint. Next, an evaluation of existing communications infrastructure and capabilities was conducted, which could be leveraged in support of the new technologies. Then, alternative solutions to “close the gap” were reviewed. Finally, all of this information was incorporated in an analytical tool that matched the most appropriate communication technology within a specified geographic area and business need.

SYSTEM MAP AND INFORMATION MODEL

Defining the data framework and the approach to overall data integration elements across the program areas is essential if companies are to effectively and efficiently implement AMI systems and realize their identified benefits.

To help PHI understand what changes are needed to get from their current state to a shared vision of the future, the project team reviewed and documented the “current state” of the systems impacted by their plans. Then, subject matter experts with expertise in meters, billing, outage, system design, work and workforce management, and business data analysis were engaged to expand on the data architecture information, including information on systems, functions and the process flows that tie them all together. Finally, the information gathered was used to develop a shared vision of how PHI processes, functions, systems and data will fit together in the future.

By comparing the design of as-is systems with the to-be architecture of information management and information flows, PHI identified information gaps and developed a set of next steps. One key step establishes an “enterprise architecture” model for development. The first objective would be to establish and enforce governance policies. With these in place, PHI will define, draft and ratify detailed enterprise architecture and enforce priorities, standards, procedures and processes.

PHASE 2 DEPLOYMENT PLAN

Based on the planning conducted over the last half of the year, a high-level project plan for Phase 2 deployment was compiled. The focus was mainly on Blueprint initiatives, while considering dependencies and constraints reported in other transformation initiatives. PHI subject matter experts, project team leads and experience gathered from other utilities were all leveraged to develop the Blueprint deployment plan.

The deployment plan includes multiple types of tasks, spanning process, organizational, technical and project management office-related activities, and covers a period of five to six years. Initiatives will be deployed in multiple releases, phased across jurisdictions (Delaware, District of Columbia, Maryland, New Jersey) and coordinated between meter installation and communications infrastructure buildout schedules.

The plan incorporates several initiatives, including process design, system development, communications infrastructure and AMI, and various customer initiatives. Because these initiatives are interrelated and complex, some programmatic initiatives are also called for, including change management, benefits realization and program management. From this deployment plan, more detailed project plans and dependencies are being developed to provide PHI with an end-to-end view of implementation.

As part of the planning effort, key risk areas for the Blueprint program were also defined, as shown in Figure 3. Input from interviews and knowledge leveraged from similar projects were included to ensure a comprehensive understanding of program risks and to begin developing mitigation strategies.

CONCLUSION

As PHI moves forward with implementation of its AMI systems, new issues and challenges are certain to arise, and programmatic elements are being established to respond. A program management office has been established and continues to drive more detail into plans while tracking and reporting progress against active elements. AMI process development is providing the details for business requirements, and system architecture discussions are resolving interface issues.

Deployment is still in its early stages, and much work lies ahead. However, with the effort grounded in a clear vision, the journey ahead looks promising.