Collaborative Policy Making and the Smart Grid

A search on Google for the keywords “smart grid” returns millions of results, and a list of organizations talking about or working on smart grid initiatives would likely run nearly as long. Although the comparison is made half in jest, it illustrates the proliferation of groups interested in redesigning and rebuilding the varied power infrastructure to support the future economy. Since building a smart infrastructure is clearly in the public’s interest, it’s important that all affected stakeholders – from utilities and legislators to consumers and regulators – participate in creating the vision, policies and framework for these critical investments.

One organization, the GridWise Alliance, was formed specifically to promote a broad collaborative effort among all the interest groups shaping this agenda. A consortium of more than 60 public organizations and private companies, the GridWise Alliance is aligned around a shared vision of a transformed and modern electric system – one that integrates infrastructure, processes, devices, information and market structure so that energy can be generated, distributed and consumed more reliably and cost-effectively.

Since its creation in 2003, the GridWise Alliance has focused on the federal legislative process to ensure that smart grid programs and policies are included in the priorities of the various federal agencies. The Alliance continues to articulate to elected officials, public policy agencies and the private sector the urgent need to build a smarter 21st-century utility infrastructure. Last year, the Alliance provided significant input into the development of smart grid legislation, which was passed by both houses of Congress and signed into law by the President at the end of 2007. The Alliance has evolved into one of the “go-to” parties for members of Congress and their staffs as they prepare new legislation aimed at the transformation to a modern and intelligent electricity grid.

The Alliance continues to demonstrate its effectiveness in various ways. Its chair, Guido Bartels, was recently named – along with representatives from seven other Alliance member companies – to the U.S. Department of Energy’s Electricity Advisory Committee (EAC). This organization is being established to “enhance leadership in electricity delivery modernization and provide senior-level counsel to DOE on ways that the nation can meet the many barriers to moving forward, including the deployment of smart grid technologies.” Another major area of focus is the national GridWeek conference. This year’s GridWeek 2008 is focused on “delivering sustainable energy”; the Alliance expects more than 800 participants to discuss and debate topics such as Enabling Energy Efficiency, Smart Grid in a Carbon Economy and Securing the Smart Grid.

Going forward, the Alliance will expand its reach by continuing to broaden its membership and by working with other U.S. stakeholder organizations to provide a richer understanding of the value and impacts of a smart grid. The Alliance is already working with organizations such as the NARUC-FERC Smart Grid Collaborative, the National Conference of State Legislatures (NCSL), the National Governors Association (NGA), the American Public Power Association (APPA) and others. Beyond U.S. borders, the Alliance will continue to strengthen its relations with other smart grid organizations, such as those in the European Union and Australia, to ensure that we’re gaining insight and best practices from other markets.

Collaboration such as that exemplified by the Alliance is critical for making effective and impactful public policy. The future of our nation’s electricity infrastructure, economy and, ultimately, health and safety depends on the leadership of organizations such as the GridWise Alliance.

Leveraging the Data Deluge: Integrated Intelligent Utility Network

If you define a machine as a series of interconnected parts serving a unified purpose, the electric power grid is arguably the world’s largest machine. The next-generation version of the electric power grid – called the intelligent utility network (IUN), the smart grid or the intelligent grid, depending on your nationality or information source – provides utilities with enhanced transparency into grid operations.

Considering the geographic and logical scale of the electric grid from any one utility’s point of view, a tremendous amount of data will be generated by the additional “sensing” of the workings of the grid provided by the IUN. This output is often described as a “data flood,” and the implication that businesses could drown in it is apropos. For that reason, utility business managers and engineers need analytical tools to keep their heads above water and obtain insight from all this data. Paraphrasing the psychologist Abraham Maslow, the “hierarchy of needs” for applying analytics to make sense of this data flood could be represented as follows (Figure 1).

  • Insight represents decisions made based on analytics calculated using new sensor data integrated with existing sensor or quasi-static data.
  • Knowledge means understanding what the data signifies in the context of other information.
  • Information means understanding precisely what the data measures.
  • Data represents the essential reading of a parameter – often a physical parameter.

In order to reap the benefits of accessing the higher levels of this hierarchy, utilities must apply the correct analytics to the relevant data. One essential element is integrating the new IUN data with other data over the various time dimensions. Indeed, it is analytics that allow utilities to truly benefit from the enhanced capabilities of the IUN compared with the traditional electric power grid. Analytics can consist solely of calculations (such as measuring reactive power), or they can be rule-based (such as rating a transformer as “stressed” if it carries more than 120 percent of its nameplate rating over a two-hour period).
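
To make the two styles concrete, here is a minimal sketch in Python – with hypothetical sample data, field names and thresholds – of a calculation-style analytic (reactive power) alongside a rule-based analytic like the transformer-stress rating just described:

```python
import math

def reactive_power(voltage_rms, current_rms, phase_angle_rad):
    """Calculation-style analytic: reactive power Q = V * I * sin(phi)."""
    return voltage_rms * current_rms * math.sin(phase_angle_rad)

def transformer_stress(load_samples_pct, hours_per_sample,
                       threshold_pct=120.0, window_hours=2.0):
    """Rule-style analytic: flag "stressed" if loading stays above
    threshold_pct of nameplate rating for an entire window_hours span."""
    window = int(window_hours / hours_per_sample)
    for i in range(len(load_samples_pct) - window + 1):
        if all(s > threshold_pct for s in load_samples_pct[i:i + window]):
            return "stressed"
    return "normal"

# Quarter-hourly loading as percent of nameplate rating (hypothetical)
print(transformer_stress([118, 123, 125, 124, 126, 127, 125, 122, 121], 0.25))
```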

The data to be analyzed comes from multiple sources. Utilities have for years had supervisory control and data acquisition (SCADA) systems in place that transmit voltage, current, watts, volt-amperes reactive (VARs) and phase angle via leased telephone lines at 9,600 baud, using the distributed network protocol (DNP3). Utilities still need to integrate the basic information from these systems.

In addition, modern electrical power equipment often comes with embedded microprocessors capable of generating useful non-operational information. This can include switch closing time, transformer oil chemistry and arc durations. These pieces of equipment – generically called intelligent electrical devices (IEDs) – often have local high-speed sequence-of-events recorders that can be programmed to deliver even more data for post-event analysis.

An increasing number of utilities are beginning to see the business cases for implementing an advanced metering infrastructure (AMI). A large-scale deployment of such meters would also function as a fine-grained edge sensor system for the distribution network, providing not only consumption but also voltage, power quality and load phase angle information. In addition, an AMI can be a strategic platform for initiating a program of demand-response load control. Indeed, some innovative utilities are considering two-way AMI meters that include a wireless connection such as ZigBee to the consumer’s home automation network (HAN), providing even finer detail on load usage and potential controllability.

Companies must find ways to analyze all this data, from both explicit sources such as IEDs and implicit sources such as AMI or geographic information systems (GIS). A crucial aspect of IUN analysis is the ability to integrate conventional database data with time-synchronized data, since an isolated analytic may be less useful than no analytic at all.
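
As one illustration of that integration step, the following self-contained sketch (the record layouts and asset identifiers are hypothetical) joins a time-stamped sensor reading to quasi-static asset data so the measurement can be interpreted in context:

```python
from bisect import bisect_right

# Quasi-static asset data, e.g. from a GIS or asset registry (hypothetical)
assets = {"XFMR-1042": {"feeder": "F12", "nameplate_kva": 500, "install_year": 1998}}

# Time-synchronized readings: (unix_time, asset_id, loading_pct), sorted by time
readings = [(1700000000, "XFMR-1042", 96.0),
            (1700000900, "XFMR-1042", 121.5)]

def reading_in_context(t, asset_id):
    """Return the latest reading at or before time t, merged with asset data."""
    rows = [r for r in readings if r[1] == asset_id]
    times = [r[0] for r in rows]
    i = bisect_right(times, t) - 1
    if i < 0:
        return None
    _, _, loading = rows[i]
    return {**assets[asset_id], "loading_pct": loading,
            "overloaded": loading > 100.0}

print(reading_in_context(1700001000, "XFMR-1042"))
```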

CATEGORIES AND RELATIONSHIPS

There are many different categories of analytics that address the specific needs of the electric power utility in dealing with the data deluge presented by the IUN. Some depend on the state regulatory environment, which not only imposes operational constraints on utilities but also determines the scope and effect of what analytics information exchange is required. For example, a generation-to-distribution utility – what some fossil plant owners call “fire to wire” – may have system-wide analytics that link load dispatch to generation economics, transmission line realities and distribution customer load profiles. Other utilities operate power lines only, and may not have their own generation capabilities or interact with consumers at all. Utilities like these may choose to focus initially on distribution analytics such as outage prediction and fault location.

Even the term analytics can have different meanings for different people. To the power system engineer it involves phase angles, voltage support from capacitor banks and equations that take the form “a + j*b.” To the line-of-business manager, integrated analytics may include customer revenue assurance, lifetime stress analysis of expensive transformers and dashboard analytics driving business process models. Customer service executives could use analytics to derive emergency load control measures based on a definition of fairness that could become quite complex. But perhaps the best general definition of analytics comes from the Six Sigma process mantra of “define, measure, analyze, improve, control.” In the computer-driven IUN, this would involve:

  • Define. This involves sensor selection and location.
  • Measure. SCADA systems enable this process.
  • Analyze. This can be achieved using IUN analytics.
  • Improve. This involves grid performance optimization, as well as business process enhancements.
  • Control. This is achieved by sending commands back to grid devices via SCADA, and by business process monitoring.

The term optimization can also be interpreted in several ways. Utilities can attempt to optimize key performance indicators (KPIs) such as the system average interruption duration index (SAIDI, which is somewhat consumer-oriented), grid efficiency in terms of megawatts lost to component heating, business processes (such as minimizing outage time to repair) or meeting energy demand with minimum incremental fuel cost.
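
For reference, SAIDI is conventionally computed as the sum of customer-interruption durations divided by the total number of customers served. A minimal sketch, assuming hypothetical outage records:

```python
def saidi(outages, customers_served):
    """SAIDI = sum over outages of (duration_minutes * customers_affected),
    divided by total customers served; result is minutes per customer."""
    return sum(d * n for d, n in outages) / customers_served

# (duration_minutes, customers_affected) per sustained interruption (hypothetical)
outages = [(90, 1200), (45, 300), (240, 50)]
print(saidi(outages, 100_000))  # about 1.3 minutes per customer
```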

Although optimization issues often cross departmental boundaries, utilities may make compromises for the sake of an overall strategic goal – one that can seem elusive or even run counter to individual financial incentives. An important part of higher-level optimization – in a business sense rather than a mathematical one – is the need for a utility to document its enterprise functions using true business process modeling tools. These are essential to finding better application integration strategies. That way, the business can monitor the advisories from analytics in the tool itself, and more easily identify business process changes suggested by patterns of online analytics.

Another aspect of IUN analytics involves – using a favorite television news phrase – “connecting the dots.” This means ensuring that a utility actually realizes the impact of a series of events on an end state, even though the individual events may appear unrelated.

For example, take complex event processing (CEP). A “simple” event might involve a credit card company’s software verifying that your credit card balance is under the limit before sending an authorization to the merchant. A “complex” event would take place if a transaction request for a given credit card account was made at a store in Boston, and another request came an hour later in Chicago. After taking into account certain realities of time and distance, the software would take an action other than approval – such as instructing the merchant to verify the cardholder’s identity.

Back in the utilities world, consideration of weather forecasts in demand-response action planning, or distribution circuit redundancy in the face of certain existing faults, can be handled by such software. The key in developing these analytics is not so much about establishing valid mathematical relationships as it is about giving a businessperson the capability to create and define rules. These rules must be formulated within an integrated set of systems that support cross-functional information. Ultimately, it is the businessperson who relates the analytics back to business processes.
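
A minimal sketch of what businessperson-defined rules might look like in practice (the rule set and event fields are hypothetical): each rule pairs a predicate over cross-functional data with an action, in the spirit of the CEP example above.

```python
# Each rule: (name, predicate over an event dict, action). Fields are hypothetical.
rules = [
    ("heat-wave-dr-check",
     lambda e: e["event"] == "dr_dispatch" and e["forecast_high_f"] > 100,
     "require operator confirmation before shedding load"),
    ("fault-with-no-redundancy",
     lambda e: e["event"] == "feeder_fault" and not e["alternate_feed_available"],
     "escalate to distribution operations and open trouble ticket"),
]

def evaluate(event):
    """Run every rule against an event; return the triggered actions."""
    return [(name, action) for name, pred, action in rules if pred(event)]

print(evaluate({"event": "feeder_fault", "alternate_feed_available": False}))
```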

AVAILABLE TOOLS

Time can be a critical variable in successfully using analytics. In some cases, utilities require analytics to be responsive to the electric power grid’s need to input, calculate and output in an actionable time frame.

Utilities often have analytics built into functions in their distribution management or energy management systems, as well as individual analytic applications, both commercial and home-grown. And some utilities are still making certain decisions by importing data into a spreadsheet and using a self-developed algorithm. No matter what the source, the architecture of the analytics system should provide a non-real-time “bus,” often a service-oriented architecture (SOA) or Web services interface, but also a more time-dependent data bus that supports common industry tools used for desktop analytics within the power industry.
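
The dual-bus idea can be illustrated in miniature (topic names and message format are hypothetical): the same analytic result is published both to a non-real-time SOA-style bus and to a time-dependent bus, with the measurement timestamp preserved for time-sensitive consumers.

```python
import time
from collections import defaultdict

class Bus:
    """Toy publish/subscribe bus; stands in for an SOA or real-time data bus."""
    def __init__(self, name):
        self.name, self.subscribers = name, defaultdict(list)
    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)
    def publish(self, topic, message):
        for handler in self.subscribers[topic]:
            handler(message)

soa_bus = Bus("soa")            # non-real-time services, reports, BI
realtime_bus = Bus("realtime")  # time-dependent desktop analytics

def publish_analytic(result):
    # Time-dependent consumers need the measurement timestamp preserved
    realtime_bus.publish("analytics/transformer", result)
    soa_bus.publish("analytics/transformer", result)

realtime_bus.subscribe("analytics/transformer", print)
publish_analytic({"asset": "XFMR-1042", "state": "stressed",
                  "measured_at": time.time()})
```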

It’s important that the utility internally publish standards for interconnecting analytics to these buses, so that all authorized stakeholders can access them. Utilities should also set enterprise policy for special connectors, manual entry and duplication of data – otherwise known as SOA governance.

The easier it is for utilities to use the IUN data, the less likely it is that their engineering, operations and maintenance staffs will be overwhelmed by the task of actually acquiring the data. Although the term “plug and play” has taken on certain negative connotations – largely due to the fact that few plug-and-play devices actually do that – the principle of easily adding a tool is still both valid and valuable. New instances of IUN can even include Web 2.0 characteristics for the purpose of mash-ups – easily configurable software modules that link, without pain, via Web services.

THE GOAL OF IMPLEMENTING ANALYTICS

Utilities benefit from applying analytics by making the best use of integrated utility enterprise information and data models, and by unlocking employee ideas or hypotheses about ways to improve operations. Often, analytics are also useful in helping employees identify unsuspected relationships in the data. The widely lamented “aging workforce” issue typically involves the loss of senior staff who could visualize relationships that were never formally captured, and who were able to make connections that others didn’t see. Higher-level analytics can partly offset the impact of this brain drain.

Another type of analytics is commonly called “business intelligence.” But although a number of best-selling general-purpose BI tools are commercially available, utilities need to ensure that the tools have access to the correct, unique, authoritative data. Upon first installing BI software, there’s sometimes a tendency among new users to quickly assemble a highly visual dashboard – without regard to the integrity of the data they’re importing into the tool.

Utilities should also create enterprise data models and data dictionaries to ensure the accuracy of the information being disseminated throughout the organization. After all, utilities frequently use analytics to create reports that summarize data at a high level. Yet some fault detection schemes – such as identifying problems in buried cables – may need original, detailed source data. For that reason utilities must have an enterprise data governance scheme in place.

In newer systems, data dictionaries and models can be provided by a Web service. But even if the dictionary consists of an intermediate lookup table in a relational database, the principles still hold: Every process and calculated variable must have a non-ambiguous name, a cross-reference to other major systems (such as a distribution management system [DMS] or geographic information system [GIS]), a pointer to the data source and the name of the person who owns the data. It is critical for utilities to assign responsibility for data accuracy, validation, source and caveats at the beginning of the analytics engineering process. Finding data faults after they contribute to less-than-correct results from the analytics is of little use. Utilities may find data scrubbing and cross-validation tools from the IT industry to be useful where massive amounts of data are involved.
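
A minimal sketch of such a data-dictionary entry (the field choices are illustrative, not a standard) capturing the unambiguous name, cross-references, source pointer and ownership the paragraph above calls for:

```python
from dataclasses import dataclass, field

@dataclass
class DictionaryEntry:
    """One entry in an enterprise data dictionary (illustrative fields only)."""
    name: str          # non-ambiguous, enterprise-wide name
    description: str
    unit: str
    source: str        # pointer to the authoritative data source
    owner: str         # person responsible for accuracy and validation
    cross_refs: dict = field(default_factory=dict)  # names in DMS, GIS, etc.
    caveats: str = ""

entry = DictionaryEntry(
    name="feeder_f12.peak_load_mw",
    description="Daily peak load on feeder F12",
    unit="MW",
    source="scada://historian/f12/load",
    owner="j.smith",
    cross_refs={"DMS": "F12_PKLOAD", "GIS": "feeder:F12"},
    caveats="Derived from 15-minute averages; not instantaneous peak.")
```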

Utilities have traditionally used simulation primarily as a planning tool. However, with the continued application of Moore’s law, the ability to feed a power system simulation with real-time data and solve a state estimation in real time can result in an affordable crystal ball for predicting problems, finding anomalies or performing emergency problem solving.
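
One hedged illustration of that “crystal ball” pattern: compare live measurements against a simulation’s output and flag the points where they disagree. The numbers and tolerance below are entirely hypothetical.

```python
def flag_anomalies(measured, simulated, tolerance_pct=5.0):
    """Compare real-time measurements against simulated values; return
    the indices where the residual exceeds the tolerance."""
    flagged = []
    for i, (m, s) in enumerate(zip(measured, simulated)):
        if abs(m - s) / abs(s) * 100.0 > tolerance_pct:
            flagged.append(i)
    return flagged

bus_voltages_kv = [13.8, 13.6, 12.1, 13.7]    # field measurements
state_estimate_kv = [13.8, 13.7, 13.6, 13.7]  # simulation output
print(flag_anomalies(bus_voltages_kv, state_estimate_kv))  # [2]
```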

THE IMPORTANCE OF STANDARDS

The emergence of industry-wide standards is making analytics easier to deploy across utility companies. Standards also help ease the path to integration. After all, most electrons look the same around the world, and the standards arising from the efforts of Kirchhoff, Tesla and Maxwell have been broadly adopted globally. (Contrary views from the quantum mechanics community will not be discussed here!) Indeed, having a documented, self-describing data model is important for any utility hoping to make enterprise-wide use of data for analytics; using an industry-standard data model makes the analytics more easily shareable. In an age of greater grid interconnection, more mergers and acquisitions, and staff shortages, utilities’ ability to reuse and share analytics and create tools on top of standards-based data models has become increasingly important.

Standards are also important when interfacing to existing utility systems. Although the IUN may be new, data on existing grid apparatus and layout may be decades old. By combining the newly added grid observations with the existing static system information to form a complete integration scenario, utilities can leverage analytics much more effectively.

When deploying an IUN, there can be a tendency to use just the newer, sensor-derived data to make decisions, because one knows where it is and how to access it. But using standardized data models makes incorporating existing data less of an issue. There is nothing wrong with creating new data models for older data.

CONCLUSION

To understand the importance of analytics in relation to the IUN, imagine an ice-cream model (pick your favorite flavor). At the lowest level we have data: the ice cream is 30 degrees. At the next level we have information: you know that it is 30 degrees on the surface of the ice cream, and that it will start melting at 32 degrees. At the next level we have knowledge: you’re measuring the temperature of the middle scoop of a three-scoop cone, and therefore when it melts, the entire structure will collapse. At the insight level we bring in other knowledge – such as that the ambient air temperature is 80 degrees, and that the surface temperature of the ice cream has been rising 0.5 degrees per minute since you purchased it. Then the gastronomic analytics activate and take preemptive action, causing you to eat the whole cone in one bite, because the temporary frozen-teeth phenomenon is less of a business risk than having the scoops melt and fault to ground.

SmartGridNet Architecture for Utilities

With the accelerating movement toward distributed generation and the rapid shift in energy consumption patterns, today’s power utilities are facing growing requirements for improved management, capacity planning, control, security and administration of their infrastructure and services.

UTILITY NETWORK BUSINESS DRIVERS

These requirements are driving a need for greater automation and control throughout the power infrastructure, from generation through the customer site. Utilities are also interested in providing end-customers with new applications, such as advanced metering infrastructure (AMI), online usage reports and outage status. Beyond meeting these requirements, utilities are under pressure to reduce costs and automate operations, as well as protect their infrastructures from service disruption in compliance with homeland security requirements.

To succeed, utilities must seamlessly support these demands with an embedded infrastructure of traditional devices and technologies. This will allow them to provide a smooth evolution to next-generation capabilities, manage life cycle issues for aging equipment and devices, maintain service continuity, minimize capital investment, and ensure scalability and future-proofing for new applications, such as smart metering.

By adopting an evolutionary approach to an intelligent communications network (SmartGridNet), utilities can maximize their ability to leverage the existing asset base and minimize capital and operations expenses.

THE NEED FOR AN INTELLIGENT UTILITY NETWORK

As a first step toward implementing a SmartGridNet, utilities must implement intelligent electronic devices (IEDs) throughout the infrastructure – from generation and transmission through distribution directly to customer premises – if they are to effectively monitor and manage facilities, load and usage. A sophisticated operational communications network then interconnects such devices through control centers, providing support for supervisory control and data acquisition (SCADA), teleprotection, remote meter reading, and operational voice and video. This network also enables new applications such as field personnel management and dispatch, safety and localization. In addition, the utility’s corporate communications network increases employee productivity and improves customer service by providing multimedia; voice, video, and data communications; worker mobility; and contact center capabilities.

These two network types – operational and corporate – and the applications they support may leverage common network facilities; however, they have very different requirements for availability, service assurance, bandwidth, security and performance.

SMARTGRIDNET REQUIREMENTS

Network technology is critical to the evolution of the next-generation utility. The SmartGridNet must support the following key requirements:

  • Virtualization. Enables operation of multiple virtual networks over common infrastructure and facilities while maintaining mutual isolation and distinct levels of service.
  • Quality of service (QoS). Allows priority treatment of critical traffic on a “per-network, per-service, per-user basis.”
  • High availability. Ensures constant availability of critical communications, transparent restoration and “always on” service – even when the public switched telephone network (PSTN) or local power supply suffers outages.
  • Multipoint-to-multipoint communications. Provides integrated control and data collection across multiple sensors and regulators via synchronized, redundant control centers for disaster recovery.
  • Two-way communications. Supports increasingly sophisticated interactions between control centers and end-customers or field forces to enable new capabilities, such as customer sellback, return or credit allocation for locally stored power; improved field service dispatch; information sharing; and reporting.
  • Mobile services. Improves employee efficiency, both within company facilities and in the field.
  • Security. Protects the infrastructure from malicious and inadvertent compromise from both internal and external sources, ensures service reliability and continuity, and complies with critical security regulations such as those of the North American Electric Reliability Corp. (NERC).
  • Legacy service integration. Accommodates the continued presence of legacy remote terminal units (RTUs), meters, sensors and regulators, supporting circuit, X.25, frame relay (FR), and asynchronous transfer mode (ATM) interfaces and communications.
  • Future-proofing. Capability and scalability to meet not just today’s applications, but tomorrow’s, as driven by regulatory requirements (such as smart metering) and new revenue opportunities, such as utility delivery of business and residential telecommunications (U-Telco) services.

SMARTGRIDNET EVOLUTION

A number of network technologies – both wire-line and wireless – work together to achieve these requirements in a SmartGridNet. Utilities must leverage a range of network integration disciplines to engineer a smooth transformation of their existing infrastructure to a SmartGridNet.

The remainder of this paper describes an evolutionary scenario, in which:

  • Next-generation synchronous optical network (SONET)-based multiservice provisioning platforms (MSPPs), with native QoS-enabled Ethernet capabilities, are seamlessly introduced at the transport layer to switch traffic from both embedded sensors and next-generation IEDs.
  • Cost-effective wave division multiplexing (WDM) is used to increase communications network capacity for new traffic while leveraging embedded fiber assets.
  • Multiprotocol label switching (MPLS)/IP routing infrastructure is introduced as an overlay on the transport layer only for traffic requiring higher-layer services that cannot be addressed more efficiently by the transport layer MSPPs.
  • Circuit emulation over IP virtual private networks (VPNs) is supported as a means for carrying sensor traffic over shared or leased network facilities.
  • A variety of communications applications are delivered over this integrated infrastructure to enhance operational efficiency, reliability, employee productivity and customer satisfaction.
  • A toolbox of access technologies is appropriately applied, per specific area characteristics and requirements, to extend power service monitoring and management all the way to the end-customer’s premises.
  • A smart home network offers new capabilities to the end-customer, such as AMI, appliance control and flexible billing models.
  • Availability, security, performance and regulatory compliance of the communications network are managed and assured.

THE SMARTGRIDNET ARCHITECTURE

Figure 1 provides an architectural framework that we may use to illustrate and map the relevant communications technologies and protocols.

The backbone network in Figure 1 interconnects corporate sites and data centers, control centers, generation facilities, transmission and distribution substations, and other core facilities. It can isolate the distinct operational and corporate communications networks and subnetworks while enforcing the critical network requirements outlined in the section above.

The underlying transport network for this intelligent backbone is made up of both fiber and wireless (for example, microwave) technologies. The backbone also employs ring and mesh architectures to provide high availability and rapid restoration.

INTELLIGENT CORE TRANSPORT

As alluring as pure packet networks may be, SONET remains a key technology for operational backbones. Only SONET can support the range of new and legacy traffic types while meeting the stringent absolute delay, differential delay and 50-millisecond restoration requirements of real-time traffic.

SONET transport for legacy traffic may be provided in MSPPs, which interoperate with embedded SONET elements to implement ring and mesh protection over fiber facilities and time division multiplexing (TDM)-based microwave. Full-featured Ethernet switch modules in these MSPPs enable next-generation traffic via Ethernet over SONET (EOS) and/or packet over SONET (POS). Appropriate, cost-effective WDM solutions – for example, coarse, passive and dense WDM – may also be applied to guarantee sufficient capacity while leveraging existing fiber assets.

BACKBONE SWITCHING/ROUTING

From a switching and routing perspective, a significant amount of traffic in the backbone may be managed at the transport layer – for example, via QoS-enabled Ethernet switching capabilities embedded in the SONET-based MSPPs. This is a key capability for supporting expedited delivery of critical traffic types, enabling utilities to migrate in the future to generic object-oriented substation event (GOOSE)-based inter-substation communications for SCADA and teleprotection, in accordance with standards such as IEC 61850.

Where higher-layer services – for example, IP VPN, multicast, ATM and FR – are required, however, utilities can introduce a multiservice switching/routing infrastructure incrementally on top of the transport infrastructure. The switching infrastructure is based on MPLS, implementing Layer 2 transport encapsulation and/or IP VPNs, per the relevant Internet Engineering Task Force (IETF) requests for comments (RFCs).

This type of unified infrastructure reduces operations costs by sharing switching and restoration capabilities across multiple services. Current IP/MPLS switching technology is consistent with the network requirements summarized above for service traffic requiring higher-layer services, and may be combined with additional advanced services such as Layer 3 VPNs and unified threat management (UTM) devices/firewalls for further protection and isolation of traffic.

CORE COMMUNICATIONS APPLICATIONS

Operational services such as tele-protection and SCADA represent key categories of applications driving the requirements for a robust, secure, cost-effective network as described. Beyond these, there are a number of communications applications enabling improved operational efficiency for the utility, as well as mechanisms to enhance employee productivity and customer service. These include, but are not limited to:

  • Active network controls. Improves capacity and utilization of the electricity network.
  • Voice over IP (VoIP). Leverages common network infrastructure to reduce the cost of operational and corporate voice communications – for example, eliminating costly channel banks for individual lines required at remote substations.
  • Closed circuit TV (CCTV)/Video Over IP. Improves surveillance of remote assets and secure automated facilities.
  • Multimedia collaboration. Combines voice, video and data traffic in a rich application suite to enhance communication and worker productivity, giving employees direct access to centralized expertise and online resources (for example, standards and diagrams).
  • IED interconnection. Enables better measurement and management of the electricity network.
  • Mobility. Leverages in-plant and field worker mobility – via cellular, land mobile radio (LMR) and WiFi – to improve efficiency of key work processes.
  • Contact center. Employs next-generation communications and best-in-class customer service business processes to improve customer satisfaction.

DISTRIBUTION AND ACCESS NETWORKS

The intelligent utility distribution and access networks are subtending networks from the backbone, accommodating traffic between backbone switches/applications and devices in the distribution infrastructure all the way to the customer premises. IEDs on customer premises include automated meters and device regulators to detect and manage customer power usage.

These new devices are primarily packet-based. They may, therefore, be best supported by packet-based access network technologies. However, for select rings, TDM may also be chosen, as warranted. The packet-based access network technology chosen depends on the specifics of the sites to be connected and the economics associated with that area (for example, right of way, customer densities and embedded infrastructure).

Regardless of the access and last-mile network designs, traffic ultimately arrives at the network via an IP/MPLS edge switch/router with connectivity to the backbone IP/MPLS infrastructure. This switching/routing infrastructure ensures connectivity among the intelligent edge devices, core capabilities and control applications.

THE SMART HOME NETWORK

A futuristic home can support many remotely controlled and managed appliances centered on lifestyle improvements of security, entertainment, health and comfort (see Figure 2). In such a home, applications like smart meters and appliance control could be provided by application service providers (ASPs) (such as smart meter operators or utilities), using a home service manager and appropriate service gateways. This architecture differentiates between the access provider – that is, the utility/U-Telco or other public carrier – and the multiple ASPs who may provide applications to a home via the access provider.

FLEXIBLE CHARGING

By employing smart meters and developing the ability to retrieve electricity usage data at regular intervals – potentially several readings per hour – retailers could make billing a significant competitive differentiator. Detailed usage information has already enabled value-added billing in the telecommunications world, and AMI can do likewise for billing electricity services. In time, electricity users will come to expect the same degree of flexible charging on their electricity bills that they already experience with their telephone bills – including, for example, prepaid and post-paid options, time-based tariffs, automated billing for house rentals (vacation), family or group tariffs, budget tariffs and messaging.
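
A minimal sketch of how interval readings enable such tariffs (the rates and reading format are hypothetical): each interval is priced against a time-of-use tariff, rather than the whole month against a flat rate.

```python
def time_of_use_bill(readings, peak_rate=0.28, offpeak_rate=0.11,
                     peak_hours=range(16, 21)):
    """Price each interval reading (hour, kWh) against a time-of-use tariff."""
    return sum(kwh * (peak_rate if hour in peak_hours else offpeak_rate)
               for hour, kwh in readings)

# (hour_of_day, kWh) pairs from a smart meter (hypothetical day of readings)
readings = [(7, 0.8), (12, 1.1), (17, 2.4), (18, 2.1), (23, 0.5)]
print(f"${time_of_use_bill(readings):.2f}")  # peak usage costs ~2.5x off-peak
```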

MANAGING THE COMMUNICATIONS NETWORK

For utilities to leverage the communications network described above to meet key business requirements, they must intelligently manage that network’s facilities and services. This includes:

  • Configuration management. Provisioning services to ensure that underlying switching/routing and transport requirements are met.
  • Fault and performance management. Monitoring, correlating and isolating fault and performance data so that proactive, preventative and reactive corrective actions can be initiated.
  • Maintenance management. Planning of maintenance activities, including material management and logistics, and geographic information management.
  • Restoration management. Creating trouble tickets, dispatching and managing the workforce, and carrying out associated tracking and reporting.
  • Security management. Assuring the security of the infrastructure, managing access to authorized users, responding to security events, and identifying and remediating vulnerabilities per key security requirements such as NERC.

Utilities can integrate these capabilities into their existing network management infrastructures, or they can fully or partially outsource them to managed network service providers.

Figure 3 shows how key technologies are mapped to the architectural framework described previously. Being able to evolve into an intelligent utilities network in a cost-effective manner requires trusted support throughout planning, design, deployment, operations and maintenance.

CONCLUSION

Utilities can evolve their existing infrastructures to meet key SmartGridNet requirements by effectively leveraging a range of technologies and approaches. Through careful planning, designing, engineering and application of this technology, such firms may achieve the business objectives of SmartGridNet while protecting their current investments in infrastructure. Ultimately, by taking an evolutionary approach to SmartGridNet, utilities can maximize their ability to leverage the existing asset base as well as minimize capital and operations expenses.

The Distributed Utility of the (Near) Future

The next 10 to 15 years will see major changes – what future historians might even call upheavals – in the way electricity is distributed to businesses and households throughout the United States. The exact nature of these changes and their long-term effect on the security and economic well-being of this country are difficult to predict. However, a consensus already exists among those working within the industry – as well as with politicians and regulators, economists, environmentalists and (increasingly) the general public – that these fundamental changes are inevitable.

This need for change is in evidence everywhere across the country. The February 26, 2008, temporary blackout in Florida served as just another warning that the existing paradigm is failing. Although at the time of this writing, the exact cause of that blackout had not yet been identified, the incident serves as a reminder that the nationwide interconnected transmission and distribution grid is no longer stable. To wit: disturbances in Florida on that Tuesday were noted and measured as far away as New York.

A FAILING MODEL

The existing paradigm of nationwide grid interconnection brought about primarily by the deregulation movement of the late 1990s emphasizes that electricity be generated at large plants in various parts of the country and then distributed nationwide. There are two reasons this paradigm is failing. First, the transmission and distribution system wasn’t designed to serve as a nationwide grid; it is aged and only marginally stable. Second, political, regulatory and social forces are making the construction of large generating plants increasingly difficult, expensive and eventually unfeasible.

The previous historic paradigm made each utility primarily responsible for generation, transmission and distribution in its own service territory; this had the benefit of localizing disturbances and fragmenting responsibility and expense. With loose interconnections to other states and regions, a disturbance in one area or a lack of resources in a different one had considerably less effect on other parts of the country, or even other parts of service territories.

For better or worse, we now have a nationwide interconnected grid – albeit one that was neither designed for the purpose nor serves it adequately. Although the existing grid can be improved, the expense would be massive, and probably cost prohibitive. Knowledgeable industry insiders, in fact, calculate that it would cost more than the current market value of all U.S. utilities combined to modernize the nationwide grid and replace its large generating facilities over the next 30 years. Obviously, the paradigm is going to have to change.

While the need for dramatic change is clear, though, what’s less clear is the direction that change should take. And time is running short: North American Electric Reliability Corp. (NERC) projects serious shortages in the nation’s electric supply by 2016. Utilities recognize the need; they just aren’t sure which way to jump first.

With a number of tipping points already reached (and the changes they describe continuing to accelerate), it’s easy to envision the scenario that’s about to unfold. Consider the following:

  • The United States stands to face a serious supply/demand disconnect within 10 years. Unless something dramatic happens, there simply won’t be nearly enough electricity to go around. Already, some parts of the country are feeling the pinch. And regulatory and legislative uncertainty (especially around global warming and environmental issues) makes it difficult for utilities to know what to do. Building new generation of any type other than “green energy” is extremely difficult, and green energy – which currently meets less than 3 percent of U.S. supply needs – cannot close the growing gap between supply and demand being projected by NERC. Specifically, green energy will not be able to replace the 50 percent of U.S. electricity currently supplied by coal within that 10-year time frame.
  • Fuel prices continue to escalate, and the reliability of the fuel supply continues to decline. In addition, increasing restrictions are being placed on fuel selection, especially coal.
  • A generation of utility workers is nearing retirement, and finding adequate replacements among the younger generation is proving increasingly difficult.
  • It’s extremely difficult to site new transmission – needed to deal with supply-and-demand issues. Even new Federal Energy Regulatory Commission (FERC) authority to authorize corridors is being met with virulent opposition.

SMART GRID NO SILVER BULLET

Distributed generation – including many smaller supply sources to replace fewer large ones – and “smart grids” (designed to enhance delivery efficiency and effectiveness) have been posited as solutions. However, although such solutions offer potential, they’re far from being in place today. At best, smart grids and smarter consumers are only part of the answer. They will help reduce demand (though probably not enough to make up the generation shortfall), and they’re both still evolving as concepts. While most utility executives recognize the problems, they continue to be uncertain about the solutions and have a considerable distance to go before implementing any of them, according to recent Sierra Energy Group surveys.

According to these surveys, more than 90 percent of utility executives now feel that the intelligent utility enterprise and smart grid (IUE/SG) – that is, the distributed utility – represents an inevitable part of their future (Figure 1). This finding was true of all utility types supplying electricity.

Although utility executives understand the problem and the IUE/SG approach to solving part of it, they’re behind in planning on exactly how to implement the various pieces. That “planning lag” for the vision can be seen in Figure 2.

At least some fault for the planning lag can be attributed to forces outside the utilities. While politicians and regulators have been emphasizing conservation and demand response, they’ve failed to produce guidelines for how this will work. And although a number of states have established mandatory green power percentages, Congress failed to do the same in the energy bill adopted in December 2007 (the Energy Independence and Security Act). While the EPACT of 2005 “urged” regulators to “urge” utilities to install smart meters, it didn’t make their installation a requirement, and thus regulators in different parts of the country have moved at different speeds on this urging.

Although we’ve entered a new era, utilities remain burdened with the internal problems caused by the “silo mentality” left over from generations of tight regulatory control. Today, real-time data is often still jealously guarded in engineering and operations silos. However, a key component in the development of intelligent utilities will be pushing both real-time and back-office data onto dashboards so that executives can make real-time decisions.

Getting from where utilities were (and in many respects still are) in the last century to where they need to be by 2018 isn’t a problem that can be solved overnight. And, in fact, utilities have historically evolved slowly. Today’s executives know that technological evolution in the utility industry needs to accelerate rapidly, but they’re uncertain where to start. For example, should you install an advanced metering infrastructure (AMI) as rapidly as possible? Do you emphasize automating the grid and adding artificial intelligence? Do you continue to build out mobile systems to push data (and more detailed, simpler instructions) to field crews who soon will be much younger and less experienced? Do you rush into home automation? Do you build windmills and solar farms? Utilities have neither the financial nor human resources to do everything at once.

THE DEMAND FOR AMI

Its name implies that a smart grid will become increasingly self-operating and self-healing – and indeed much of the technology for this type of intelligent network grid has been developed. It has not, however, been widely deployed. Utilities, in fact, have been working on basic distribution automation (DA) – the capability to operate the grid remotely – for a number of years.

As mentioned earlier, most theorists – not to mention politicians and regulators – feel that utilities will have to enable AMI and demand response/home automation if they’re to encourage energy conservation in an impending era of short supplies. While automated meter reading (AMR) has been around for a long time, its penetration remains relatively small in the utilities industry – especially in the case of advanced AMI meters for enabling demand response: According to figures released by Sierra Energy Group and Newton-Evans Research Co., only 8 to 10 percent of this country’s utilities were using AMI meters by 2008.

That said, the push for AMI on the part of both EPACT 2005 and regulators is having an obvious effect. Numerous utilities (including companies like Entergy and Southern Co.) that previously refused to consider AMR now have AMI projects in progress. However, even though an anticipated building boom in AMI is finally underway, there’s still much to be done to enable the demand response that will be desperately needed by 2016.

THE AUTOMATED HOME

The final area we can expect the IUE/SG concept to envelop comes at the residential level. With residential home automation in place, utilities will be able to control usage directly – by adjusting thermostats or compressor cycling, or via other techniques. Again, the technology for this has existed for some time; however, there are very few installations nationwide. A number of experiments were conducted with home automation in the early- to mid-1990s, with some subdivisions even being built under the mantra of “demand-side management.”

Demand response – the term currently in vogue with politicians – may be considered more politically correct, but the net result is the same. Home automation will enable regulators, through utilities, to ration usage. Although politicians avoid using the word rationing, if global warming concerns continue to seriously impact utilities’ ability to access adequate generation, rationing will be the result – making direct load control at the residential level one of the most problematic issues in the distributed utility paradigm of the future. Are large numbers of Americans going to acquiesce calmly to their electrical supply being rationed? No one knows, but there seem to be few options.

GREEN PRESSURE AND THE TIPPING POINT

While much legitimate scientific debate remains about whether global warming is real and, if so, whether it’s a naturally occurring or man-made phenomenon (arising primarily from carbon dioxide emissions), that debate is diminishing among politicians at every level. The majority of politicians, in fact, have bought into the notion that carbon emissions from many sources – primarily the generation of electricity by burning coal – are the culprit.

Thus, despite continued scientific debate, the political tipping point has been reached, and U.S. politicians are making moves to force this country’s utility industry to adapt to a situation that may or may not be real. Whether or not it makes logical or economic sense, utilities are under increasing pressure to adopt the Intelligent Utility/Smart Grid/Home Automation/Demand Response model – a model that includes many small generation points to make up for fewer large plants. This political tipping point is also shutting down more proposed generation projects each month, adding to the likely shortage. Since 2000, approximately 50 percent of all proposed new coal-fired generation plants have been canceled, according to energy-industry adviser Wood Mackenzie (Gas and Power Service Insight, February 2008).

In the distant future, as technology continues to advance, electric generation in the United States will likely include a mix of energy sources, many of them distributed and green. However, there’s no way that in the next 10 years – the window of greatest concern in the NERC projections on the generation and reliability side – green energy will be ready and available in sufficient quantities to forestall a significant electricity shortfall. Nuclear energy represents the only truly viable solution; however, ongoing opposition to this form of power generation makes it unlikely that sufficient nuclear energy will be available within this period. The already-lengthy licensing process (though streamlined somewhat of late by the Nuclear Regulatory Commission) is exacerbated by lawsuits and opposition every step of the way. In addition, most of the necessary engineering and manufacturing processes have been lost in the United States over the last 30 years – the time elapsed since the last U.S. nuclear plant was built – making it necessary to reacquire that knowledge from abroad.

The NERC Reliability Report of Oct. 15, 2007, points strongly toward a significant shortfall of electricity within approximately 10 years – a situation that could lead to rolling blackouts and brownouts in parts of the country that have never experienced them before. It could also lead to mandatory “demand response” – in other words, rationing – at the residential level. This situation, however, is not inevitable: technology exists to prevent it (including nuclear and cleaner coal now as well as a gradual development of solar, biomass, sequestration and so on over time, with wind for peaking). But thanks to concern over global warming and other issues raised by the environmental community, many politicians and regulators have become convinced otherwise. And thus, they won’t consider a different tack to solving the problem until there’s a public outcry – and that’s not likely to occur for another 10 years, at which point the national economy and utilities may already have suffered tremendous (possibly irreparable) harm.

WHAT CAN BE DONE?

The problem the utilities industry faces today is neither economic nor technological – it’s ideological. The global warming alarmists are shutting down coal before sufficient economically viable replacements (with the possible exception of nuclear) are in place. And the rest of the options are tied up in court. (For example, the United States needs some 45 liquefied natural gas facilities to support a conversion to gas – a costly fuel with iffy reliability – but only five have been built; the rest are tied up in court.) As long as it’s possible to tie up nuclear applications for five to 10 years and shut down “clean coal” plants through the political process, the U.S. utility industry is left with few options.

So what are utilities to do? They must get much smarter (IUE/SG), and they must prepare for rationing (AMI/demand response). As seen in SEG studies, utilities still have a ways to go in these areas, but at least this is a strategy that can (for the most part) be put in place within 10 to 15 years. The technology for IUE/SG already exists; it’s relatively inexpensive (compared with large-scale green energy development and nuclear plant construction); and utilities can employ it with relatively little regulatory oversight. In fact, regulators are actually encouraging it.

For these reasons, IUE/SG represents a major bridge to a more stable future. Even if today’s apocalyptic scenarios fail to develop – that is, global warming is debunked, or new generation sources develop much more rapidly than expected – intelligent utilities with smart grids will remain a good idea. The paradigm is shifting as we watch – but will that shift be completed in time to prevent major economic and social dislocation? Fasten your seatbelts: the next 10 to 15 years should be very interesting!

The Smart Grid: A Balanced View

Energy systems in both mature and developing economies around the world are undergoing fundamental changes. There are early signs of a physical transition from the current centralized energy generation infrastructure toward a distributed generation model, where active network management throughout the system creates a responsive and manageable alignment of supply and demand. At the same time, the desire for market liquidity and transparency is driving the world toward larger trading areas – from national to regional – and providing end-users with new incentives to consume energy more wisely.

CHALLENGES RELATED TO A LOW-CARBON ENERGY MIX

The structure of current energy systems is changing. As load and demand for energy continue to grow, many current-generation assets – particularly coal and nuclear systems – are aging and reaching the end of their useful lives. The increasing public awareness of sustainability is simultaneously driving the international community and national governments alike to accelerate the adoption of low-carbon generation methods. Complicating matters, public acceptance of nuclear energy varies widely from region to region.

Public expectations of what distributed renewable energy sources can deliver – for example, wind, photovoltaic (PV) or micro-combined heat and power (micro-CHP) – are increasing. But unlike conventional sources of generation, the output of many of these sources is not based on electricity load but on weather conditions or heat. From a system perspective, this raises new challenges for balancing supply and demand.

In addition, these new distributed generation technologies require system-dispatching tools to effectively control the low-voltage side of electrical grids. Moreover, they indirectly create a scarcity of “regulating energy” – the energy necessary for transmission operators to maintain the real-time balance of their grids. This forces the industry to try to harness the power of conventional central generation technologies, such as nuclear power, in new ways.

A European Union-funded consortium named Fenix is identifying innovative network and market services that distributed energy resources can potentially deliver, once the grid becomes “smart” enough to integrate all energy resources.

In Figure 1, the Status Quo Future represents how system development would play out under the traditional system operation paradigm characterized by today’s centralized control and passive distribution networks. The alternative, Fenix Future, represents the system capacities with distributed energy resources (DER) and demand-side generation fully integrated into system operation, under a decentralized operating paradigm.

CHALLENGES RELATED TO NETWORK OPERATIONAL SECURITY

The regulatory push toward larger trading areas is increasing the number of market participants. This trend is in turn driving the need for increased network dispatch and control capabilities. Simultaneously, grid operators are expanding their responsibilities across new and complex geographic regions. Combine these factors with an aging workforce (particularly when trying to staff strategic processes such as dispatching), and it’s easy to see why utilities are becoming increasingly dependent on information technology to automate processes that were once performed manually.

Moreover, the stochastic nature of energy sources significantly increases uncertainty regarding supply. Researchers are trying to improve the accuracy of the information captured in substations, but this requires new online dispatching stability tools. Additionally, as grid expansion remains politically controversial, current efforts are mostly focused on optimizing energy flow in existing physical assets, and on trying to feed asset data into systems calculating operational limits in real time.

Last but not least, these capabilities enable the extension of generation dispatch and congestion management into low-voltage distribution grids. Although these grids were traditionally used to flow energy one way – from generation through transmission to end-users – the increasing penetration of distributed resources creates a new need to coordinate the dispatch of these resources locally, and to minimize transportation costs.

CHALLENGES RELATED TO PARTICIPATING DEMAND

Recent events have shown that decentralized energy markets are vulnerable to price volatility. This poses potentially significant economic threats for some nations: there’s a risk of large industrial companies quitting deregulated countries because they lack visibility into long-term energy price trends.

One potential solution is to improve market liquidity in the shorter term by providing end-users with incentives to conserve energy when demand exceeds supply. The growing public awareness of energy efficiency is already leading end-users to be much more receptive to using sustainable energy; many utilities are adding economic incentives to further motivate end-users.

These trends are expected to create radical shifts in transmission and distribution (T&D) investment activities. After all, traditional centralized system designs, investments and operations are based on the premise that demand is passive and uncontrollable, and that it makes no active contribution to system operations.

However, the extensive rollout of intelligent metering capabilities has the potential to reverse this, and to enable demand to respond to market signals, so that end-users can interact with system operators in real or near real time. The widening availability of smart metering thus has the potential to bring with it unprecedented levels of demand response that will completely change the way power systems are planned, developed and operated.

CHALLENGES RELATED TO REGULATION

Parallel with these changes to the physical system structure, the market and regulatory frameworks supporting energy systems are likewise evolving. Numerous energy directives have established the foundation for a decentralized electricity supply industry that spans formerly disparate markets. This evolution is changing the structure of the industry from vertically integrated, state-owned monopolies into an environment in which unbundled generation, transmission, distribution and retail organizations interact freely via competitive, accessible marketplaces to procure and sell system services and contracts for energy on an ad hoc basis.

Competition and increased market access seem to be working at the transmission level in markets where there are just a handful of large generators. However, this approach has yet to be proven at the distribution level, where it could facilitate thousands and potentially millions of participants offering energy and systems services in a truly competitive marketplace.

MEETING THE CHALLENGES

As a result, despite all the promise of distributed generation, today’s increasingly decentralized system will become unstable without the corresponding development of technical, market and regulatory frameworks over the next three to five years.

System management costs are increasing, and threats to system security are a growing concern as installed distributed generating capacity in some areas exceeds local peak demand. The cost of provisioning “regulating energy” rises as stress on the system increases; meanwhile, governments continue to push for greater distributed resource penetration and launch new energy efficiency initiatives.

At the same time, most of the large T&D utilities intend to launch new smart grid prototypes that, once stabilized, will be scalable to millions of connection points. The majority of these rollouts are expected to occur between 2010 and 2012.

From a functionality standpoint, the majority of these associated challenges are related to IT system scalability. The process will require applying existing algorithms and processes to generation activities, but in an expanded and more distributed manner.

The following new functions will be required to build a smart grid infrastructure that enables all of this:

New generation dispatch. This will enable utilities to expand their current portfolios of generation-dispatching tools to schedule generation assets across both transmission and distribution (a minimal dispatch sketch follows this list). Utilities could thus better manage the growing number of parameters impacting the decision, including fuel options, maintenance strategies, the generation unit’s physical capability, weather, network constraints, load models, emissions (modeling, rights, trading) and market dynamics (indices, liquidity, volatility).

Renewable and demand-side dispatching systems. By expanding current energy management systems (EMS) capability and architecture, utilities should be able to scale to include millions of active producers and consumers. Resources will be dispatched in real time by energy service companies, promoting the most eco-friendly portfolio dispatch methods based on contractual arrangements between the energy service providers and these distributed producers and consumers.

Integrated online asset management systems. New technology tools that help transmission grid operators assess the condition of their overall assets in real time will not only maximize asset usage, but will lead to better leveraging of utilities’ field forces. New standards such as IEC 61850 offer opportunities to manage such models more centrally and more consistently.

Online stability and defense plans. The increasing penetration of renewable generation into grids, combined with deregulation, increases the need for flow control at interconnections between transmission system operators (TSOs). Additionally, the industry requires improved “situation awareness” tools in the control centers of utilities operating in larger geographical markets. Although conventional steady-state transmission security indicators have improved, utilities still need better early warning applications and adaptable defense plan systems.
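As a rough illustration of the “new generation dispatch” function described above, the sketch below fills a target output from a portfolio of distributed resources in merit order, cheapest first. A real dispatching tool would also handle the network constraints, ramp rates, emissions and market parameters listed above; the fleet data here is invented.

    # Minimal merit-order dispatch: meet a target from many small
    # resources at least cost. Real tools add network constraints, ramp
    # rates, emissions and market dynamics. All data below is invented.

    def merit_order_dispatch(resources, target_mw):
        """resources: list of (name, capacity_mw, marginal_cost) tuples.
        Returns {name: dispatched_mw}, filling cheapest resources first."""
        schedule = {}
        remaining = target_mw
        for name, cap, _cost in sorted(resources, key=lambda r: r[2]):
            take = min(cap, remaining)
            if take > 0:
                schedule[name] = take
                remaining -= take
        if remaining > 1e-9:
            raise ValueError(f"short by {remaining:.1f} MW")
        return schedule

    fleet = [("chp_plant", 5.0, 42.0), ("rooftop_pv", 2.0, 0.0),
             ("microturbine", 0.03, 55.0), ("wind_farm", 10.0, 5.0)]
    print(merit_order_dispatch(fleet, target_mw=8.0))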

MOVING TOWARDS A DISTRIBUTED FUTURE

As concerns about energy supply have increased worldwide, the focus on curbing demand has intensified. Regulatory bodies around the world are thus actively investigating smart meter options. But despite the benefits that smart meters promise, they also raise new challenges on the IT infrastructure side. Before each end-user is able to flexibly interact with the market and the distribution network operator, massive infrastructure re-engineering will be required.

Nonetheless, energy systems throughout the world are already evolving from a centralized to a decentralized model. But to successfully complete this transition, utilities must implement active network management through their systems to enable a responsive and manageable alignment of supply and demand. By accomplishing this, energy producers and consumers alike can better match supply and demand, and drive the world toward sustainable energy conservation.

The Customer-Focused Utility

THE CHANGING DYNAMICS OF CUSTOMER RELATIONSHIPS

The utilities industry is in transition. External factors – including shifts in governmental policies, a globally felt sense of urgency about conserving energy, advances in power generation techniques and new technologies – are driving massive changes throughout the industry. Utilities are also under internal pressure to prevent profit margins from eroding. But most significantly, utilities must evolve to compete in a marketplace where consumers increasingly expect high-quality customer service and believe that no company deserves their unconditional loyalty if it cannot perform to expectations. These pressures are putting many utility providers into seriously competitive, market-driven situations where the customer experience becomes a primary differentiator.

In the past, utility companies had very limited interactions with customers. Apart from opening new accounts and billing for services, the relationship was remote, with customers giving no more thought to their power provider than they would to finding a post office. Consumers were indifferent to greenhouse gas (GHG) emissions and essentially took a passive view of all utility functions, only contacting the utility if their lights temporarily went out.

In contrast, the utility of the future can expect a much more intense level of customer involvement. If utilities embrace programs to change customers’ behaviors – for example, by implementing time-of-use rates – customers will need more information on a timelier basis in order to make educated decisions. In addition, customers will expect higher levels of service to keep up with changes in the rest of the commercial world. As consumers get used to checking their bank account and credit card balances via mobile devices, they’ll soon expect the same from all similar services, including their utility company. As younger consumers (Generation Y and now Generation Z) begin their relationships with utilities, they bring expectations of a digital, mobile and collaborative customer service experience. Taking a broader perspective, most age segments – even baby boomers – will begin demanding these new multichannel experiences at times that are convenient for them.

The most significant industry shifts will alter the level of interaction between the utility grid and the home. In the past, this was a one-way street; in the future, however, more households will be adopting “participatory generation” due to their increased use of renewable energy. This will require a more sophisticated home/grid relationship, in order to track the give and take of power between consumers as both users and generators. This shift will likely change the margin equation for most utility companies.

Customer Demands Drive Technology Change; Technology Change Drives Customer Demand

Utilities are addressing these and other challenges by implementing new business models that are supported by new technologies. The most visible – and arguably the most important – of the new technologies are advanced metering infrastructure (AMI) and the technical components of the smart grid, which integrates AMI with distribution automation and other technologies to connect a utility’s equipment, devices, systems, customers, partners and employees. The integration of these technologies with customer information systems (CIS) and other customer relationship management (CRM) tools will increase consumer control of energy expenditures. Most companies in the industry will need to shift away from the “ratepayer” approach they currently use to serve residential and small business customers, and adapt to changing consumer behavior and emerging business models enabled by new network and generation technologies.

Impacts on the Customer Experience

There are multiple paths to smart grid deployment, all of which utility firms have employed to leverage new sources of data on power demand. If we consider a gradual transformation from today’s centralized, one-way view to a network that is both distributed and dynamic, we can begin to project how technological shifts will impact the utility-consumer relationship, as illustrated in Figure 1.

The future industry value chain for grid-connected customers will have the same physical elements and flow as the current one but be able to provide many more information-oriented elements. Consequently, the shift to a customer-focused view will have serious implications for data management. These include a proliferation of data as well as new mandates for securely tracking, updating, accessing, analyzing and ensuring quality.

In addition, utilities must develop customer experience capabilities in parallel with extending their energy information management capabilities. Taking the smart grid path requires customers to be more involved, as decision-making responsibility shifts more toward the consumer, as depicted in Figure 2.

It’s also important to consider some of the new interactions that consumers will have with their utility company. Some of these will be viewed as “features” of the new technology, whereas others may significantly change how consumers view their relationship with their energy provider. Still others will have a profound impact on how data is captured and deployed within the organization. These interactions may include:

  • Highly detailed, timely and accurate individuated customer information;
  • Interaction between the utility and smart devices – including the meter – in the home (possibly based on customers’ preferences);
  • Seamless, bidirectional, individual communication permitting an extended dialogue across multiple channels such as short message service, integrated voice response, portals and customer care;
  • Rapid (real-time) analysis of prior usage, current usage and prediction of future usage under multiple usage and tariff models;
  • Information presented in a customer-friendly manner;
  • Analytical tools that enable customers to model their consumption behavior and understand the impact of changes on energy cost and carbon footprint;
  • Ability to access and integrate a wide range of external information sources, and present pertinent selections to a customer;
  • Integration of information flow from field operations to the customer call center infrastructure; and
  • Highly skilled, knowledgeable contact center agents who can not only provide accurate information but can advise and recommend products, services, rate plans or changes in consumption profiles.

Do We Need to Begin Thinking About Customers Differently?

Two primary factors will determine the nature of the interface between utilities and consumers in the future. The first is the degree to which consumers will take the initiative in making decisions about their energy supply and consumption. The second is the amount and percentage of disposable income that consumers allocate to energy, which will directly influence their consumption and conservation choices, as shown in Figure 3.

How Do Utilities Influence Customers’ Behavior?

One of the major benefits of involving energy customers in generation and consumption decisions is that it can serve to decrease base load. Traditionally, utilities have taken two basic approaches to accomplishing this: coercion and enticement. Coercion is a penalty-based approach for inducing a desired behavior. For example, utilities may charge higher rates for peak period usage, forcing customers to change the hours when they consume power or pay more for peak period usage. The risks of this approach include increased customer dissatisfaction and negative public and regulatory opinion.

Enticement, on the other hand, is an incentive-based approach for driving a desired behavior. For example, utilities could offer cost savings to customers who shift power consumption to off-peak times. The risks associated with this approach include low customer involvement, because incentives may not be enough to overcome the inconvenience to customers.

Both of these approaches have produced results in the past, but neither will necessarily work in the new, more interactive environment. A number of other strategies may prove more effective in the future. For example, customer goal achievement may be one way to generate positive behavior. This model offers benefits to customers by making it easier for them to achieve their own energy consumption or conservation goals. It also gives customers the feeling that they have choices – which promotes a more positive relationship between the customer and the utility. Ease of use represents another factor that influences customer behavior. Companies can accomplish this by creating programs and interfaces that make it simple for the customer to analyze information and make decisions.

There is no “silver bullet” approach to successfully influencing all customers in all utility environments. Often, each customer segment must be treated differently, and each utility company will need to develop a unique customer experience strategy and plan that fits the needs of its unique business situation. The variables will include macro factors such as geography, customer econo-graphics and energy usage patterns; however, they’ll also involve more nuanced attributes such as customer service experiences, customer advocacy attitudes and their individual emotional dispositions.

CONCLUSION

Most utilities considering implementing advanced metering or broader smart grid efforts focus almost exclusively on deploying new technologies. However, they also need to consider customer behavior. Utilities must adopt a new approach that expands the scope of their strategic road map by integrating the “voice of the customer” into the technology planning and deployment process.

By carefully examining a utility customer’s expectations and anticipating the customer impacts brought on by innovative technologies, smart utility companies can get ahead of the customer experience curve, drive more value to the bottom line and ultimately become truly customer focused.

Software-Based Intelligence: The Missing Link in the SmartGrid Vision

Achieving the SmartGrid vision requires more than advanced metering infrastructure (AMI), supervisory control and data acquisition (SCADA), and advanced networking technologies. While these critical technologies provide the main building blocks of the SmartGrid, its fundamental keystone – its missing link – will be embedded software applications located closer to the edge of the electric distribution network. Only through embedded software will the true SmartGrid vision be realized.

To understand what we mean by the SmartGrid, let’s take a look at some of its common traits:

  • It’s highly digital.
  • It’s self-healing.
  • It offers distributed participation and control.
  • It empowers the consumer.
  • It fully enables electricity markets.
  • It optimizes assets.
  • It’s evolvable and extensible.
  • It provides information security and privacy.
  • It features an enhanced system for reliability and resilience.

All of the above-described traits – which together comprise a holistic definition of the SmartGrid – share the requirement to embed intelligence in the hardware infrastructure (which is composed of advanced grid components such as AMI and SCADA). Just as important as the hardware for hosting the embedded software are the communications and networking technologies that enable real-time and near real-time communications among the various grid components.

The word intelligence has many definitions; however, the one cited in the 1994 Wall Street Journal article “Mainstream Science on Intelligence” (by Linda Gottfredson, and signed by 51 other professors) offers a reasonable application to the SmartGrid. It defines the word intelligence as the “ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience.”

While the ability of the grid to approximate the reasoning and learning capabilities of humans may be a far-off goal, the fact that the terms intelligence and smart appear so often these days raises the following question: How can the existing grid become the SmartGrid?

THE BRAINS OF THE OPERATION

The fact that the SmartGrid derives its intelligence directly from analytics and algorithms via embedded intelligence applications based on analytical software can’t be overemphasized. While seemingly simple in concept and well understood in other industries, this topic typically isn’t addressed in any depth in many SmartGrid R&D and pilot projects. Given the buzz surrounding the SmartGrid industry, every company with any related technology is labeling it SmartGrid technology – all well and good, as long as you aren’t concerned about actually having intelligence in your SmartGrid project. It is this author’s opinion, however, that very few companies actually have the right stuff to claim the “smart” or “intelligence” part of the SmartGrid infrastructure – what we see as the missing link in the SmartGrid value chain.

A more realistic way to define intelligence in reference to the SmartGrid might read as follows:

The ability to provide longer-term planning and balancing of the grid; near real-time and real-time sensing, filtering, planning and balancing; and additional capabilities for self-healing, adaptive response and upgradeable logic to support continuous changes to grid operations – all in order to ensure cost reductions, reliability and resilience.

Software-based intelligence can be applied to all aspects or characteristics of the SmartGrid as discussed above. Figure 1 summarizes these roles.

BASIC BUILDING BLOCKS

Taking into consideration the very high priority that must be placed on established IT-industry concepts of security and interoperability as defined in the GridWise Architecture Council (GWAC) Framework for Interoperability, the SmartGrid should include as its basic building blocks the components outlined in Figure 2.

The real-world grid and supporting infrastructure will need to incorporate legacy systems as well as incremental changes consisting of multiple and disparate upgrade paths. The ideal path to realizing the SmartGrid vision must consider the installation of any SmartGrid project using the order shown in Figure 2 – that is, the device hardware would be installed in Block 1, communications and networking infrastructure added in Block 2, embedded intelligence added in Block 3, and middleware services and applications layered in Block 4. In a perfect world, the embedded intelligence software in Block 3 would be configured into the device block at the time of design or purchase. Some intelligence types (in the form of services or applications) that could be preconfigured into the device layer with embedded software include (but aren’t limited to) the following (a minimal interface sketch follows the list):

  • Capture. Provides status and reports on operation, performance and usage of a given monitored device or environment.
  • Diagnose. Enables a device to self-optimize or allows a service person to monitor, troubleshoot, repair and maintain devices; upgrades or augments performance of a given device; and prevents problems with version control, technology obsolescence and device failure.
  • Control and automate. Coordinates the sequenced activity of several devices. This kind of intelligence can also cause devices to perform discrete on/off actions.
  • Profile and track behavior. Monitors variations in the location, culture, performance, usage and sales of a device.
  • Replenishment and commerce. Monitors consumption of a device and buying patterns of the end-user (allowing applications to initiate purchase orders or other transactions when replenishment is needed); provides location mapping and logistics; tracks and optimizes the service support system for devices.
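As a minimal illustration of how such intelligence types could be preconfigured into a device layer, the Python sketch below stubs out three of the five (capture, diagnose, control and automate). It illustrates the concept only; the class, fields and device names are invented, not any product’s actual interface.

    # Illustrative only: a stub device-intelligence layer exposing a few
    # of the intelligence types listed above. Not any product's real API.

    from dataclasses import dataclass, field

    @dataclass
    class DeviceIntelligence:
        device_id: str
        readings: list = field(default_factory=list)

        def capture(self, status):
            """Capture: record operation, performance and usage reports."""
            self.readings.append(status)

        def diagnose(self):
            """Diagnose: trivial self-check standing in for real diagnostics."""
            if self.readings and self.readings[-1].get("error"):
                return "fault: schedule service"
            return "healthy"

        def control(self, on):
            """Control and automate: a discrete on/off action."""
            return f"{self.device_id} -> {'ON' if on else 'OFF'}"

    meter = DeviceIntelligence("meter-001")
    meter.capture({"kwh": 1.2, "error": None})
    print(meter.diagnose(), "|", meter.control(True))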

EMBEDDED INTELLIGENCE AT WORK

Intelligence types will, of course, differ according to their application. For example, a distribution utility looking to optimize assets and real-time distribution operations may need sophisticated mathematical and artificial intelligence solutions with dynamic, nonlinear optimization models (to accommodate a high amount of uncertainty), while a homeowner wishing to participate in demand response may require less sophisticated business rules. The embedded intelligence is, therefore, responsible for the management and mining of potentially billions, if not trillions, of device-generated data points for decision support, settlement, reliability and other financially significant transactions. This computational intelligence can sense, store and analyze any number of information patterns to support the SmartGrid vision. In all cases, the software infrastructure portion of the SmartGrid building blocks must accommodate any number of these cases – from simple to complex – if the economics are to be viable.
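The homeowner end of that spectrum can indeed be served by a couple of plain business rules, as in the sketch below. The thresholds, prices and device actions are illustrative assumptions.

    # Two plain business rules standing in for homeowner demand response.
    # Thresholds, prices and device actions are invented.

    def demand_response_action(price_per_kwh, indoor_temp_c,
                               comfort_max_c=26.0, price_cap=0.30):
        """Curtail flexible load when price is high, unless comfort is at risk."""
        if price_per_kwh >= price_cap and indoor_temp_c < comfort_max_c:
            return "defer water heater; raise AC set point by 1 C"
        return "run normally"

    print(demand_response_action(0.35, 24.0))      # high price -> curtail
    print(demand_response_action(0.35, 27.0))      # comfort overrides price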

For example, the GridAgents software platform is being used in several large U.S. utility distribution automation infrastructure enhancements to embed intelligence in the entire distribution and extended infrastructure; this in turn facilitates multiple applications simultaneously, as depicted in Figure 3 (highlighting microgrids and compact networks). Included are the following example applications: renewables integration, large-scale virtual power plant applications, volt and VAR management, SmartMeter management and demand response integration, condition-based maintenance, asset management and optimization, fault location, isolation and restoration, look-ahead contingency analysis, distribution operation model analysis, relay protection coordination, “islanding” and microgrid control, and sense-and-respond applications.

Using this model of embedded intelligence, the universe of potential devices that could be directly included in the grid system spans building and home automation, distribution automation, substation automation, transmission systems, and energy markets and operations – all part of what Harbor Research terms the Pervasive Internet. The Pervasive Internet concept assumes that devices are connected using TCP/IP protocols; it is not limited by whether a particular network is a mission-critical SCADA network or a home automation network (though the two obviously require very different security protocols). As the missing link, the embedded software intelligence we’ve been talking about can be present in any of these Pervasive Internet devices.

DELIVERY SYSTEMS

There are many ways to deliver the embedded software intelligence building block of the SmartGrid, and many vendors who will be vying to participate in this rapidly expanding market. In a physical sense, the embedded intelligence can be delivered through various grid interfaces, including facility-level and distribution-system automation and energy management systems. The best way to realize the SmartGrid vision, however, will most likely come out of making as much use as possible of the existing infrastructure (since installing new infrastructure is extremely costly). The most promising areas for embedding intelligence include the various gateways and collector nodes, as well as devices on the grid itself (as shown in Figure 4). Examples of such devices include SmartMeter gateways, substation PCs, inverter gateways and so on. By taking advantage of the natural and distributed hierarchy of device networks, multiple SmartGrid service offerings can be delivered with a common infrastructure and common protocols.

Some of the most promising technologies for delivering the embedded intelligence layer of the SmartGrid infrastructure include the following (a toy sketch of the agent approach follows the list):

  • The semantic Web is an extension of the current Web that permits machine-understandable data. It provides a common framework that allows data to be shared and re-used across application and company boundaries. It integrates applications using URLs for naming and XML for syntax.
  • Service-oriented computing represents a cross-disciplinary approach to distributed software. Services are autonomous, platform-independent computational elements that can be described, published, discovered, orchestrated and programmed using standard protocols. These services can be combined into networks of collaborating applications within and across organizational boundaries.
  • Software agents are autonomous, problem-solving computational entities. They often interact and cooperate with other agents (both people and software) that may have conflicting aims. Known as multi-agent systems, such environments add the ability to coordinate complex business processes and adapt to changing conditions on the fly.
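To give a feel for the agent approach described in the last bullet, the toy sketch below shows two autonomous agents coordinating through a shared message bus with no central controller. Everything here is invented for illustration; real multi-agent platforms are far richer.

    # Toy sketch: two autonomous agents coordinate through a message bus
    # with no central controller. Names and thresholds are invented.

    import queue

    bus = queue.Queue()

    class Agent:
        def __init__(self, name):
            self.name = name

        def post(self, topic, payload):
            bus.put((self.name, topic, payload))

    feeder = Agent("feeder-agent")
    heater = Agent("heater-agent")

    feeder.post("price", 0.32)  # the feeder agent publishes a price spike
    while not bus.empty():
        sender, topic, payload = bus.get()
        if topic == "price" and payload > 0.25:
            # the heater agent reacts autonomously to the posted event
            print(f"{heater.name} curtails load after {sender} reported {payload}")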

CONCLUSION

By incorporating the missing link in the SmartGrid infrastructure – the embedded-intelligence software building block – the SmartGrid vision can not only be achieved, but significant benefits to the utility and other stakeholders can be delivered much more efficiently and with incremental changes to the functions supporting the SmartGrid vision. Embedded intelligence provides a structured way to communicate with and control the large number of disparate energy-sensing, communications and control systems within the electric grid infrastructure. This includes the capability to deploy at low cost, scale and enable security as well as the ability to interoperate with the many types of devices, communication networks, data protocols and software systems used to manage complex energy networks.

A fully distributed intelligence approach based on embedded software offers potential advantages in lower cost, flexibility, security, scalability and acceptance among a wide group of industry stakeholders. By embedding functionality in software and distributing it across the electrical distribution network, the intelligence is pushed to the edge of the system network, where it can provide the most value. In this way, every node can be capable of hosting an intelligent software program. Although decentralized structures remain a controversial topic, this author believes they will be critical to the success of next-generation energy networks (the SmartGrid). The current electrical grid infrastructure is composed of a large number of existing potential devices that provide data which can serve as the starting point for embedded smart monitoring and decision support, including electric meters, distribution equipment, network protectors, distributed energy resources and energy management systems. From a high-level design perspective, the embedded intelligence software architecture needs to support the following:

  • Decentralized management and intelligence;
  • Extensibility and reuse of software applications;
  • New components that can be removed or added to the system with little central control or coordination;
  • Fault tolerance both at the system level and the subsystem level to detect and recover from system failures;
  • Support for carrying out analysis and control where the resources are available rather than where the results are needed (at the edge versus the central grid);
  • Compatibility with different information technology devices and systems;
  • Open communication protocols that run on any network; and
  • Interoperability and integration with existing and evolving energy standards.

Adding the embedded-intelligence building block to existing SmartGrid infrastructure projects (including AMI and SCADA) and advanced networking technology projects will bring the SmartGrid vision to market faster and more economically while accommodating the incremental nature of SmartGrid deployments. The embedded intelligence software can provide some of the greatest benefits of the SmartGrid, including asset optimization, run-time intelligence and flexibility, the ability to solve multiple problems with a single infrastructure and greatly reduced integration costs through interoperability.

The GridWise Olympic Peninsula Project

The Olympic Peninsula Project consisted of a field demonstration and test of advanced price signal-based control of distributed energy resources (DERs). Sponsored by the U.S. Department of Energy (DOE) and led by the Pacific Northwest National Laboratory, the project was part of the Pacific Northwest GridWise Testbed Demonstration.

Other participating organizations included the Bonneville Power Administration, Public Utility District (PUD) #1 of Clallam County, the City of Port Angeles, Portland General Electric, IBM’s T.J. Watson Research Center, Whirlpool and Invensys Controls. The main objective of the project was to convert normally passive loads and idle distributed generation into actively participating resources optimally coordinated in near real time to reduce stress on the local distribution system.

Planning began in late 2004, and the bulk of the development work took place in 2005. By late 2005, equipment installations had begun, and by spring 2006, the experiment was fully operational, remaining so for one full year.

The motivating theme of the project was based on the GridWise concept that inserting intelligence into electric grid components at every point in the supply chain – from generation through end-use – will significantly improve both the electrical and economic efficiency of the power system. In this case, information technology and communications were used to create a real-time energy market system that could control demand response automation and distributed generation dispatch. Optimal use of the DER assets was achieved through the market, which was designed to manage the flow of power through a constrained distribution feeder circuit.

The project also illustrated the value of interoperability in several ways, as defined by the DOE’s GridWise Architecture Council (GWAC). First, a highly heterogeneous set of energy assets, associated automation controls and business processes was composed into a single solution integrating a purely economic or business function (the market-clearing system) with purely physical or operational functions (thermostatic control of space heating and water heating). This demonstrated interoperability at the technical and informational levels of the GWAC Interoperability Framework (www.gridwiseac.org/about/publications.aspx), providing an ideal example of a cyber-physical-business system. In addition, it represents an important class of solutions that will emerge as part of the transition to smart grids.

Second, the objectives of the various asset owners participating in the market were continuously balanced to maintain the optimal solution at any point in time. This included the residential demand response customers; the commercial and municipal entities with both demand response and distributed generation; and the utilities, which demonstrated interoperability at the organizational level of the framework.

PROJECT RESOURCES

The following energy assets were configured to respond to market price signals:

  • Residential demand response for electric space and water heating in 112 single-family homes using gateways connected by DSL or cable modem to provide two-way communication. The residential demand response system allowed the current market price of electricity to be presented to customers. Consumers could also configure their demand response automation preferences. The residential consumers were evenly divided among three contract types (fixed, time of use and real time) and a fourth control group. All electricity consumption was metered, but only the loads in price-responsive homes were controlled by the project (approximately 75 kW).
  • Two distributed generation units (175 kW and 600 kW) at a commercial site served the facility’s load when the feeder supply was not sufficient. These units were not connected in parallel to the grid, so they were bid into the market as a demand response asset equal to the total load of the facility (approximately 170 kW). When the bid was satisfied, the facility disconnected from the grid and shifted its load to the distributed generation units.
  • One distributed microturbine (30 kW) that was connected in parallel to the grid. This unit was bid into the market as a generation asset based on the actual fixed and variable expenses of running the unit.
  • Five 40-horsepower (HP) water pumps distributed between two municipal water-pumping stations (approximately 150 kW of total nameplate load). The demand response load from these pumps was incrementally bid into the market based on the water level in the pumped storage reservoir, effectively converting the top few feet of the reservoir capacity into a demand response asset on the electrical grid.

Monitoring was performed for all of these resources, and in cases of price-responsive contracts, automated control of demand response was also provided. All consumers who employed automated control were able to temporarily disable or override project control of their loads or generation units. In the residential real-time price demand response homes, consumers were given a simple configuration choice for their space heating and water heating that involved selecting an ideal set point and a degree of trade-off between comfort and price responsiveness.

For real-time price contracts, the space heater demand response involved automated bidding into the market by the space heating system. Since the programmable thermostats deployed in the project didn’t support real-time market bidding, IBM Research implemented virtual thermostats in software using an event-based distributed programming prototype called Internet-Scale Control Systems (iCS). The iCS prototype is designed to support distributed control applications that span virtually any underlying device or business process through the definition of software sensor, actuator and control objects connected by an asynchronous event programming model that can be deployed on a wide range of underlying communication and runtime environments. For this project, virtual thermostats were defined that conceptually wrapped the real thermostats and incorporated all of their functionality while at the same time providing the additional functionality needed to implement the real-time bidding. These virtual thermostats received the actual temperature of the house as well as information about the real-time market average price and price distribution and the consumer’s preferences for set point and comfort/economy trade-off setting. This allowed the virtual thermostats to calculate the appropriate bid every five minutes based on the changing temperature and market price of energy.
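One plausible formalization of that five-minute bid calculation (inferred from the description above, not taken from the project’s published algorithm) is sketched below: the bid moves away from the average market price in proportion to how far the house has drifted from its set point, scaled by the comfort/economy setting.

    # Hypothetical bid rule for a virtual thermostat; the shape and the
    # constants are assumptions inferred from the text, not the project's
    # published algorithm.

    def thermostat_bid(temp_c, setpoint_c, price_mean, price_std,
                       comfort_slider, max_dev_c=2.0):
        """comfort_slider in [0, 1]: 0 = maximize comfort, 1 = maximize economy.
        A house colder than its set point bids above the mean price for heat."""
        deviation = (setpoint_c - temp_c) / max_dev_c     # >0 means too cold
        aggressiveness = 1.0 - comfort_slider             # comfort bids harder
        return price_mean + deviation * aggressiveness * 3.0 * price_std

    # A cold house with a comfort-leaning slider bids well above the mean:
    print(round(thermostat_bid(19.0, 21.0, 0.10, 0.02, comfort_slider=0.2), 3))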

The real-time market in the project was implemented as a shadow market – that is, rather than change the actual utility billing structure, the project implemented a parallel billing system and a real-time market. Consumers still received their normal utility bill each month, but in addition they received an online bill from the shadow market. This additional bill was paid from a debit account that used funds seeded by the project based on historical energy consumption information for the consumer.

The objective was to provide an economic incentive to consumers to be more price responsive. This was accomplished by allowing the consumers to keep the remaining balance in the debit account at the end of each quarter. Those consumers who were most responsive were estimated to receive about $150 at the end of the quarter.

The market in the project cleared every five minutes, having received demand response bids, distributed generation bids and a base supply bid based on the supply capacity and wholesale price of energy in the Mid-Columbia system operated by Bonneville Power Administration. (This was accomplished through a Dow Jones feed of the Mid-Columbia price and other information sources for capacity.) The market operation required project assets to submit bids every five minutes into the market, and then respond to the cleared price at the end of the five-minute market cycle. In the case of residential space heating in real-time price contract homes, the virtual thermostats adjusted the temperature set point every five minutes; however, in most cases the adjustment was negligible (for example, one-tenth of a degree) if the price was stable.
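The clearing step itself can be pictured as a uniform-price auction: stack supply offers cheapest first and demand bids dearest first, then trade until the curves cross. The sketch below is a toy version of that logic with invented quantities and prices; the project’s actual market was considerably more sophisticated.

    # Toy uniform-price clearing: supply offers cheapest first, demand
    # bids dearest first, trade until they cross. All numbers invented.

    def clear_market(supply, demand):
        """supply/demand: lists of (price, kw). Returns (clearing_price, kw traded)."""
        supply = sorted(supply)                    # cheapest offers first
        demand = sorted(demand, reverse=True)      # highest bids first
        traded, price = 0.0, None
        s_i = d_i = 0
        s_left, d_left = supply[0][1], demand[0][1]
        while (s_i < len(supply) and d_i < len(demand)
               and demand[d_i][0] >= supply[s_i][0]):
            qty = min(s_left, d_left)
            traded += qty
            price = supply[s_i][0]                 # last accepted offer sets price
            s_left -= qty
            d_left -= qty
            if s_left == 0:
                s_i += 1
                s_left = supply[s_i][1] if s_i < len(supply) else 0
            if d_left == 0:
                d_i += 1
                d_left = demand[d_i][1] if d_i < len(demand) else 0
        return price, traded

    base_supply = [(0.05, 500), (0.12, 200)]       # base feed plus a DG offer
    dr_bids = [(0.20, 300), (0.08, 250)]           # demand-side bids
    print(clear_market(base_supply, dr_bids))      # -> (0.05, 500.0)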

KEY FINDINGS

Distribution constraint management. As one of the primary objectives of the experiment, distribution constraint management was successfully accomplished. The feeder import capacity was managed through demand response automation to a cap of 750 kW for all but one five-minute market cycle during the project year. In addition, distributed generation was dispatched as needed during the project, up to a peak of about 350 kW.

During one period of about 40 hours that took place from Oct. 30, 2006, to Nov. 1, 2006, the system successfully constrained the feeder import capacity at its limit and dispatched distributed generation several times, as shown in Figure 1. In this figure, actual demand under real-time price control is shown in red, while the blue line depicts what demand would have been without real-time price control. It should be noted that the red demand line steps up and down above the feeder capacity line several times during the event – this is the result of distributed generation units being dispatched and removed as their bid prices are met or not.

Market-based control demonstrated. The project controlled both heating and cooling loads, which showed a surprisingly significant shift in energy consumption. Space conditioning loads in real-time price contract homes demonstrated a significant shift to early morning hours – a shift that occurred during both constrained and unconstrained feeder conditions but was more pronounced during constrained periods. This is similar to what one would expect in preheating or precooling systems, but neither the real nor the virtual thermostats in the project had any explicit prediction capability. The analysis showed that the diurnal shape of the price curve itself caused the effect.

Peak load reduced. The project’s real-time price control system both deferred and shifted peak load very effectively. Unlike the time-of-use system, the real-time price control system operated at a fine level of precision, responding only when constraints were present and resulting in a precise and proportionally appropriate level of response. The time-of-use system, on the other hand, was much coarser in its response and responded regardless of conditions on the grid, since it was only responding to preconfigured time schedules or manually initiated critical peak price signals.

Internet-based control demonstrated. Bids and control of the distributed energy resources in the project were implemented over Internet connections. As an example, the residential thermostats modified their operation through a combination of local and central control communicated as asynchronous events over the Internet. Even in situations of intermittent communication failure, resources typically performed well in default mode until communications could be re-established. This example of the resilience of a well-designed, loosely coupled distributed control application schema is an important aspect of what the project demonstrated.

Distributed generation served as a valuable resource. The project was highly effective in using the distributed generation units, dispatching them many times over the duration of the experiment. Since the diesel generators were restricted by environmental licensing regulations to operate no more than 100 hours per year, the bid calculation factored in a sliding scale price premium such that bids would become higher as the cumulative runtime for the generators increased toward 100 hours.
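That sliding-scale premium translates naturally into a bid function. In the sketch below, the bid rises from the generator’s base running cost toward a multiple of it as cumulative runtime approaches the 100-hour licensing limit; the quadratic shape and the constants are assumptions, not the project’s actual formula.

    # Hypothetical sliding-scale bid: the premium grows as cumulative
    # runtime approaches the 100-hour environmental licensing limit.

    def dg_bid(base_cost, hours_run, hours_limit=100.0, max_premium=5.0):
        """Bid rises from base_cost toward (1 + max_premium) * base_cost."""
        frac = min(hours_run / hours_limit, 1.0)
        return base_cost * (1.0 + max_premium * frac ** 2)

    for h in (0, 50, 90, 100):
        print(h, "hours ->", round(dg_bid(0.12, h), 3), "$/kWh")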

CONCLUSION

The Olympic Peninsula Project was unique in many ways. It clearly demonstrated the value of the GridWise concepts of leveraging information technology and incorporating market constructs to manage distributed energy resources. Local marginal price signals, as implemented through the market-clearing process, together with the overall event-based software integration framework, successfully managed the bidding and dispatch of loads while balancing wholesale costs, distribution congestion and customer needs in a very natural fashion.

The final report (as well as background material) on the project is available at www.gridwise.pnl.gov. The report expands on the remarks in this article and provides detailed coverage of a number of important assertions supported by the project, including:

  • Market-based control was shown to be a viable and effective tool for managing price-based responses from single-family premises.
  • Peak load reduction was successfully accomplished.
  • Automation was extremely important in obtaining consistent responses from both supply and demand resources.
  • The project demonstrated that demand response programs could be designed by establishing debit account incentives without changing the actual energy prices offered by energy providers.

Although technological challenges were identified and noted, the project found no fundamental obstacles to implementing similar systems at a much larger scale. Thus, it’s hoped that an opportunity to do so will present itself at some point in the near future.

Trilliant: Advanced Metering Infrastructure Solutions for Utilities and Green Energy Markets

Trilliant Incorporated provides wireless network solutions and software for advanced metering, demand response, smart grid and submetering. With more than 20 years’ experience solving utility meter communications needs, the company gives utilities flexibility and choice through the adoption and integration of open, standards-based technologies.

ADVANCED METERING

Trilliant SecureMesh™ AMI solutions enable utilities to introduce services and programs such as time-of-use (TOU) metering, CIS-initiated real-time meter reads and customer disconnect/reconnect. These programs are transforming the traditional customer-utility relationship through interval-based consumption data and two-way messaging, resulting in reduced operational costs and improved reliability.

DEMAND RESPONSE

Many utilities are initiating smart metering and AMI programs with a primary goal of addressing critical peak demand challenges using TOU pricing, critical peak pricing and demand response programs. Trilliant is the first AMI supplier to provide an open standards-based platform for AMI-integrated demand response (i.e., load control) incorporating smart thermostats – and thus air conditioning equipment – and other loads such as pool pumps and water heaters. The Trilliant Demand Response solution also supports in-premise (“in-home”) displays that offer consumers real-time information on energy usage and utility-initiated messages.

SMART GRID

By leveraging Smart Grid solutions from Trilliant, utilities can realize dramatic improvements in system performance and cost. System operational challenges such as outage detection and restoration verification are supported through a combination of network-based intelligence and operations center applications. Trilliant’s Smart Grid solutions enable operations to more effectively identify faults and rapidly restore service on the basis of real-time readings of on-premise conditions. These offerings may also be integrated with extended enterprise systems supporting the mobile field force. Smart Grid solutions from Trilliant provide the foundation for advanced applications such as utility asset life cycle management and others that can benefit from the use of actual loading data.

SUBMETERING

Trilliant Energy Services offerings include turnkey submetering solutions, utility data profiling and online presentment to meet the needs of electric and natural gas utilities. Because Trilliant is an expert in energy technology, the company’s solutions offer benefits to all stakeholders – from condo developers and corporations to building owners and managers, and directly to residential suite owners.

Ontario Pilot

Smart metering technologies are making it possible to provide residential utility customers with the sophisticated “smart pricing” options once available only to larger commercial and industrial customers. When integrated with appropriate data manipulation and billing systems, smart metering systems can enable a number of innovative pricing and service regimes that shift or reduce energy consumption.

In addition, by giving customers ready access to up-to-date information about their energy demand and usage through a more informative bill, an in-home display monitor or an enhanced website, utilities can supplement smart pricing options and promote further energy conservation.

SMART PRICES

Examples of smart pricing options include the following (a simple rate lookup sketch follows the list):

  • Time-of-use (TOU) is a tiered system where price varies consistently by day or time of day, typically with two or three price levels.
  • Critical peak pricing (CPP) imposes dramatically higher prices during specific days or hours in the year to reflect the actual or deemed price of electricity at that time.
  • Critical peak rebate (CPR) programs enable customers to receive rebates for using less power during specific periods.
  • Hourly pricing allows energy prices to change on an hourly basis in conformance with market prices.
  • Price adjustments reflect customer participation in load control, distributed generation or other programs.
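To see how the first two options combine, the sketch below implements a toy rate lookup in which a declared critical peak event overrides the normal TOU tier. The tiers, hours and prices are invented and don’t reproduce any specific tariff.

    # Toy TOU + CPP rate lookup; every tier boundary and price is invented.

    def price_per_kwh(hour, is_critical_event=False, cpp_price=0.30):
        """Three TOU tiers; a declared critical peak event overrides the tier."""
        if is_critical_event:
            return cpp_price
        if 11 <= hour < 17:
            return 0.11                            # peak
        if 7 <= hour < 11 or 17 <= hour < 21:
            return 0.08                            # mid-peak
        return 0.05                                # off-peak

    print(price_per_kwh(13))                          # peak hour
    print(price_per_kwh(13, is_critical_event=True))  # critical peak event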

SMART INFORMATION

Although time-sensitive pricing is designed primarily to reduce peak demand, these programs also typically result in a small reduction in overall energy consumption. This reduction is caused by factors independent of the primary objective of TOU pricing. These factors include the following:

  • Higher peak pricing causes consumers to eliminate, rather than merely delay, activities or habits that consume energy. Some of the load reductions that higher peak or critical peak prices produce are merely shifted to other time periods. For example, consumers do not stop doing laundry; they simply switch to doing it at non-peak times. In these cases the usage is “recovered.” Other load reductions, such as those resulting from consumers turning off lights or lowering heat, are not recovered, thus reducing the household’s total electricity consumption.
  • Dynamic pricing programs give participants a more detailed awareness of how they use electricity, which in turn results in lower consumption.
  • These programs usually increase the amount of usage information or feedback received by the customer, which also encourages lower consumption.

The key challenge for utilities and policy makers comes in deciding which pricing and communications structures will most actively engage their customers and drive the desired conservation behaviors. Studies show that good customer feedback on energy usage can reduce total consumption by 5 to 10 percent. Smart meters let customers readily access more up-to-date information about their hourly, daily and monthly energy usage via in-home displays, websites and even monthly bill inserts.

The smart metering program undertaken by the province of Ontario, Canada, presents one approach and serves as a useful example for utility companies contemplating similar deployments.

ONTARIO’S PROGRAM

In 2004, anticipating a serious energy generation shortfall in coming years, the government of Ontario announced plans to have smart electricity meters installed in 800,000 homes and small businesses by the end of 2007, and throughout Ontario by 2010. The initiative will affect approximately 4.5 million customers.

As the regulator of Ontario’s electricity industry, the Ontario Energy Board (OEB) was responsible for designing the smart prices that would go with these smart meters. The plan was to introduce flexible, time-of-use electricity pricing to encourage conservation and peak demand shifting. In June 2006, the OEB commissioned IBM to manage a pilot program that would help determine the best structure for prices and the best ways to communicate these prices.

By Aug. 1, 2006, 375 residential customers in the Ottawa area of Ontario had been recruited into a seven-month pilot program. Customers were promised $50 as an incentive for remaining on the pilot for the full period and $25 for completing the pilot survey.

Pilot participants continued to receive and pay their “normal” bimonthly utility bills. Separately, participants received monthly electricity usage statements that showed their electricity supply charges on their respective pilot price plan, as illustrated in Figure 1. Customers were not provided with any other new channels for information, such as a website or in-home display.

A control group that continued being billed at standard rates was also included in the study. Three pricing structures were tested in the pilot, with 125 customers in each group (a baseline-calculation sketch follows the list):

  • Time-of-use (TOU). Ontario’s TOU pricing includes off-peak, mid-peak and peak prices that varied between the winter and summer seasons.
  • TOU with CPP. Customers were notified a day in advance that the price of the electricity commodity (not delivery) for three or four hours the next day would increase to 30 cents per kilowatt hour (kWh) – nearly six times the average TOU price. Seven critical peak events were declared during the pilot period – four in summer and three in winter. Figure 2 shows the different pricing levels.
  • TOU with CPR. During the same critical peak hours as CPP, participants were provided a rebate for reductions below their “baseline” usage. The baseline was calculated as the average usage for the same hours of the five previous non-event, non-holiday weekdays, multiplied by 125 percent.
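The CPR baseline rule translates almost directly into code: average the usage in the event hours over the five qualifying prior weekdays, then multiply by 125 percent. The usage figures below are invented.

    # Direct transcription of the stated baseline rule; data invented.

    def cpr_baseline(history_kwh):
        """history_kwh: event-hour usage for each of the five previous
        non-event, non-holiday weekdays. Returns the baseline in kWh."""
        assert len(history_kwh) == 5, "rule uses exactly five prior weekdays"
        return 1.25 * sum(history_kwh) / 5

    usage = [3.2, 2.9, 3.5, 3.0, 3.4]              # kWh in the event window
    baseline = cpr_baseline(usage)                 # -> 4.00 kWh
    event_usage = 2.4
    print(f"rebate paid on {baseline - event_usage:.2f} kWh below baseline")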

The results from the Ontario pilot clearly demonstrate that customers want to be engaged and involved in their energy service and use. Consider the following:

  • Within the first week, and before enrollment was suspended, more than 450 customers responded to the invitation letter and submitted requests to be part of the pilot – a remarkable 25 percent response rate. In subsequent focus groups, participants emphasized a desire to better monitor their own electricity usage and give the OEB feedback on the design of the pricing. These were in fact the primary reasons cited for enrolling in the pilot.
  • In comparison to the control group, total load shifting during the four summertime critical peak periods ranged from 5.7 percent for TOU-only participants to 25.4 percent for CPP participants.
  • By comparing the usage of the treatment and control groups before and during the pilot, a substantial average conservation effect of 6 percent was recorded across all customers.
  • Over the course of the entire pilot period, on average, participants shifted consumption and paid 3 percent, or $1.44, less on monthly bills with the TOU pilot prices, compared with what they would have paid using the regular electricity prices charged by their utility. Of all participants, 75 percent saved money on TOU prices. Figure 3 illustrates the distribution of savings.
  • When this shift in consumption was combined with the reduction in customers’ overall consumption, a total average monthly savings of more than $4 resulted. From this perspective, 93 percent of customers would pay less on the TOU prices over the course of the pilot program than they would have with the regular electricity prices charged by their utility.
  • Citing greater control of their energy costs and benefits to the environment, 7 percent of participants surveyed said they would recommend TOU pricing to their friends.

There were also some unexpected results. For instance, there was no pattern of customers shifting demand away from the dinnertime peak period in winter. In addition, TOU-only pricing alone did not result in a statistically significant shifting of power away from peak periods.

CONCLUSION

In summary, participants in the Ontario Energy Board’s pilot program approved of these smarter pricing structures, used less energy overall, shifted consumption from peak periods in the summertime and, as a result, most paid less on their utility bills.

Over the next decade, as the utility industry evolves to the intelligent utility network and smart metering technologies are deployed to all customers, utilities will have many opportunities to implement new electricity pricing structures. This transition will represent a considerable technical challenge, testing the limits of the latest communications, data management, engineering, metering and security technologies.

But the greater challenge may come from customers. Much of the benefit from smart metering is directly tied to real, measurable and predictable changes in how customers use energy and interact with their utility provider. Capturing this benefit requires successfully managing the complex interplay of economic incentives, consumer behavior and societal change. Studies such as the OEB Smart Pricing Pilot provide another step in penetrating this complexity, helping the utility industry better understand how customers react to and interact with these new approaches.