How Intelligent Is Your Grid?

Many people in the utility industry see the intelligent grid — an electric transmission and distribution network that uses information technology to predict and adjust to network changes — as a long-term goal that utilities are still far from achieving. Energy Insights research, however, indicates that today’s grid is more intelligent than people think. In fact, utilities can begin building the network of the future today by better leveraging their existing resources and focusing on the intelligent grid backbone.

DRIVERS FOR THE INTELLIGENT GRID

Before discussing the intelligent grid backbone, it’s important to understand the drivers directing the intelligent grid’s progress. While many groups — such as government, utilities and technology companies — may be pushing the intelligent grid forward, they are also slowing it down. Here’s how:

  • Government. With the 2005 U.S. Energy Policy Act and the more recent 2007 Energy Independence and Security Act, the federal government has acknowledged the intelligent grid’s importance and is supporting investment in the area. Furthermore, public utility commissions (PUCs) have begun supporting intelligent grid investments like smart metering. At the same time, however, PUCs have a duty to maintain reasonable prices. Since utilities have not extensively tested the benefits of some intelligent grid technologies, such as distribution line sensors, many regulators hesitate to support utilities investing in intelligent grid technologies beyond smart metering.
  • Utilities. Energy Insights research indicates that information technology, in general, enables utilities to increase operational efficiency and reduce costs. For this reason, utilities are open to information technology; however, they’re often looking for quick cost recovery and benefits. Many intelligent grid technologies provide longer-term benefits, making them difficult to cost-justify over the short term. Because utilities tend to be risk-averse, this makes intelligent grid investments look riskier than traditional information technology investments.
  • Technology. Although advanced enough to function on the grid today, many intelligent grid technologies could become quickly outdated thanks to the rapidly developing marketplace. What’s more, the life span of many intelligent grid technologies is not as long as those of traditional grid assets. For example, a smart meter’s typical life span is about 10 to 15 years, compared with 20 to 30 years for an electro-mechanical meter.

With strong drivers and competing pressures like these, it’s not a question of whether the intelligent grid will happen but when utilities will implement new technologies. Given the challenges facing the intelligent grid, the transition will likely be more of an evolution than a revolution. As a result, utilities are making their grids more intelligent today by focusing on the basics, or the intelligent grid backbone.

THE INTELLIGENT GRID BACKBONE

What comprises this backbone? Answering this question requires a closer look at how intelligence changes the grid. Typically, a utility has good visibility into the operation of its generation and transmission infrastructure but poor visibility into its distribution network. As a result, the utility must respond to a changing distribution network based on very limited information. Furthermore, if a grid event requires attention — such as in the case of a transformer failure — people must review information, decide to act and then manually dispatch field crews. This type of approach translates to slower, less informed reactions to grid events.

The intelligent grid changes these reactions through a backbone of technologies — sensors, communication networks and advanced analytics — especially developed for distribution networks. To better understand these changes, we can imagine a scenario where a utility has an outage on its distribution network. As shown in Figure 1, additional grid sensors collect more information, making it easier to detect problems. Communications networks then allow sensors to convey the problem to the utility. Advanced analytics can efficiently process this information and determine more precisely where the fault is located, as well as automatically respond to the problem and dispatch field crews. These components not only enable faster, better-informed reactions to grid problems; they can also enable real-time pricing, improve demand response and better handle distributed and renewable energy sources.
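As a loose illustration of how these backbone components could work together in software, the sketch below walks through the outage scenario: hypothetical sensor readings arrive over a communications network, a simple analytics step estimates the faulted segment, and a crew dispatch is triggered. All names, thresholds and the fault-location rule are assumptions for illustration, not any vendor’s product.

```python
from dataclasses import dataclass

# Hypothetical sensor reading delivered over the utility's communications network.
@dataclass
class SensorReading:
    feeder_id: str
    segment: int          # position of the sensor along the radial feeder
    current_amps: float   # load current seen by the sensor

def locate_fault(readings: list[SensorReading]) -> int | None:
    """Very simplified analytics: on a radial feeder, the fault sits just
    downstream of the last sensor that still measures fault-level current."""
    fault_threshold = 1000.0  # assumed pickup level, in amps
    tripped = [r for r in readings if r.current_amps >= fault_threshold]
    if not tripped:
        return None
    return max(r.segment for r in tripped) + 1  # first de-energized segment

def dispatch_crew(feeder_id: str, segment: int) -> None:
    # Stand-in for a work-management integration; here we only print the order.
    print(f"Dispatch crew to feeder {feeder_id}, segment {segment}")

if __name__ == "__main__":
    readings = [
        SensorReading("FDR-12", 1, 1250.0),
        SensorReading("FDR-12", 2, 1180.0),
        SensorReading("FDR-12", 3, 15.0),   # downstream of the fault
    ]
    faulted_segment = locate_fault(readings)
    if faulted_segment is not None:
        dispatch_crew("FDR-12", faulted_segment)
```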

A CLOSER LOOK AT BACKBONE COMPONENTS

A deeper dive into each of these intelligent grid backbone technologies reveals how utilities are gaining more intelligence about their grid today.

Network sensors are important not only for real-time operations — such as locating faults and connecting distributed energy sources to the grid — but also for providing a rich historical data source to improve asset maintenance and load research and forecasting. Today, more utilities are using sensors to better monitor their distribution networks; however, they’re focused primarily on smart meters. The reason for this is that smart meters have immediate operational benefits that make them attractive for many utilities today, including reducing meter reader costs, offering accurate billing information, providing theft control and satisfying regulatory requirements. Yet this focus on smart meters has created a monitoring gap between the transmission network and the smart meter.

A slew of sensors are available from companies such as General Electric, ABB, PowerSense, GridSense and Serveron to fill this monitoring gap. These sensors track everything from load balancing and transformer status to circuit breakers and tap changers, energized downed lines, high-impedance faults and stray voltage. Utilities nevertheless hesitate to invest in them because they lack the immediate operational benefits of smart meters.

By monitoring this gap, however, utilities can realize longer-term grid benefits such as a reduced need to build new generation capacity. Utilities have found they can begin monitoring this gap by:

  • Prioritizing sensor investments. Customer complaints and regulatory pressure have pushed some utilities to take action for particular parts of their service territory. For example, one utility Energy Insights studied received numerous customer complaints about a particular feeder’s reliability, so the utility invested in line sensors for that area. Another utility began considering sensor investments in troubled areas of its distribution network when regulators demanded that the utility improve its System Average Interruption Frequency Index (SAIFI) and System Average Interruption Duration Index (SAIDI) rankings from the bottom 50 percent to the top 25 percent of benchmarked utilities (both indices are computed as in the sketch following this list). By focusing on such areas, utilities can achieve “quick wins” with sensors and build confidence in using additional sensors on their distribution grid.
  • Realizing it’s all about compromise. Even in high-priority areas, it may not make financial sense for a utility to deploy the full range of sensors for every possible asset. In some situations, utilities may target a particular area of the service territory with a higher density of sensors. For example, a large U.S. investor-owned utility with a medium voltage-sensing program placed a high density of sensors along a specific section of its service territory. On the other hand, utilities might cover a broader area of service territory with fewer sensors, similar to the approach taken by a large investor-owned utility Energy Insights looked at that monitored only transformers across its service territory.
  • Rolling in sensors with other intelligent grid initiatives. Some utilities find ways to combine their smart metering projects with other distribution network sensors or to leverage existing investments that could support additional sensors. One utility that Energy Insights looked at installed transformer sensors along with a smart meter initiative and leveraged the communications networks it used for smart metering.
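For reference, SAIFI and SAIDI (per IEEE Std 1366) are the total number of customer interruptions and the total customer interruption minutes, each divided by the number of customers served. The sketch below computes both from a made-up outage history; the record layout is an assumption for illustration.

```python
from dataclasses import dataclass

@dataclass
class Outage:
    feeder_id: str
    customers_interrupted: int
    duration_minutes: float

def saifi(outages: list[Outage], customers_served: int) -> float:
    """System Average Interruption Frequency Index: interruptions per customer served."""
    return sum(o.customers_interrupted for o in outages) / customers_served

def saidi(outages: list[Outage], customers_served: int) -> float:
    """System Average Interruption Duration Index: interruption minutes per customer served."""
    return sum(o.customers_interrupted * o.duration_minutes for o in outages) / customers_served

# Invented outage history for a service territory of 50,000 customers.
history = [
    Outage("FDR-07", 1800, 95.0),
    Outage("FDR-07", 2400, 40.0),
    Outage("FDR-21", 600, 180.0),
]
print(f"SAIFI: {saifi(history, 50_000):.3f} interruptions per customer")
print(f"SAIDI: {saidi(history, 50_000):.1f} minutes per customer")
```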

While sensors provide an important means of capturing information about the grid, communication networks are critical to moving that information throughout the intelligent grid — whether between sensors or field crews. Typically, to enable intelligent grid communications, utilities must either build new communications networks to bring intelligence to the existing grid or incorporate communication networks into new construction. Yet utilities today are also leveraging existing or recently installed communications networks to facilitate more sophisticated intelligent grid initiatives such as the following:

  • Smart metering and automated meter-reading (AMR) initiatives. With the current drive to install smart meters, many utilities are covering their distribution networks with communications infrastructure. Furthermore, existing AMR deployments may include communications networks that can bring data back to the utility. Some utilities are taking advantage of these networks to begin plugging other sensors into their distribution networks.
  • Mobile workforce. The deployment of mobile technologies for field crews is another hot area for utilities right now. Utilities are deploying cellular networks for field crew voice and data communications. Although utilities have typically been hesitant to work with third-party communications providers, they’ve become more comfortable with outside providers after using them for their mobile technologies. Since most cellular networks can provide data coverage as well, some utilities are beginning to use these providers to transmit sensor information across their distribution networks.

Since smart metering and mobile communications networks are already in place, the incremental cost of installing sensors on these networks is relatively low. The key is making sure that different sensors and components can plug into these networks easily (for example, using a standard communications protocol).
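One way to picture the “standard communications protocol” point: if every device publishes readings in a common, self-describing message format, a new sensor type can ride the existing smart-metering or cellular network without custom integration work. The JSON layout below is purely illustrative and does not correspond to any particular standard.

```python
import json
from datetime import datetime, timezone

def make_reading(device_id: str, device_type: str, quantity: str,
                 value: float, unit: str) -> str:
    """Package a sensor reading as a common JSON message that any backbone
    component (head-end system, historian, analytics) can parse the same way."""
    message = {
        "device_id": device_id,
        "device_type": device_type,   # e.g., "transformer_sensor" or "smart_meter"
        "quantity": quantity,         # e.g., "oil_temperature" or "energy"
        "value": value,
        "unit": unit,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(message)

# A transformer sensor and a smart meter emit structurally identical messages,
# so the same communications and analytics pipeline can carry both.
print(make_reading("XFMR-0421", "transformer_sensor", "oil_temperature", 78.5, "degC"))
print(make_reading("MTR-99812", "smart_meter", "energy", 1.42, "kWh"))
```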

The last key piece of the intelligent grid backbone is advanced analytics. Utilities are required to make quick decisions every day if they’re to maintain a safe and reliable grid, and the key to making such decisions is being well informed. Intelligent grid analytics can help utilities quickly process large amounts of data from sensors so that they can make those informed decisions. However, how quickly a decision needs to be made depends on the situation. Intelligent grid analytics assist with two types of decisions: very quick decisions (veQuids) and quick decisions (Quids). veQuids are made in milliseconds by computers and intelligent devices analyzing complex, real-time data – an intelligent grid vision that’s still a future development for most utilities.

Fortunately, many proactive decisions about the grid don’t have to be made in milliseconds. Many utilities today can make Quids — often manual decisions — to predict and adjust to network changes within a time frame of minutes, days or even months.

No matter how quick the decision, however, all predictive efforts depend on access to good-quality data. In putting their Quid capabilities to use today — in particular for predictive maintenance and smart metering — utilities are building not only intelligence about their grids but also a foundation for providing more advanced veQuids analytics in the future through the following:

  • The information foundation. Smart metering and predictive maintenance require utilities to collect not only more data but also more real-time data. Smart metering also helps break down barriers between retail and operational data sources, which in turn creates better visibility across many data sources to provide a better understanding of a complex grid.
  • The automation transition. Making the leap from Quids to veQuids requires more than just better access to more information — it also requires automation. While fully automated decision-making is still a thing of the future, many utilities are taking steps to compile and display data automatically and to perform some basic analysis, using dashboards from providers such as OSIsoft and Obvient Strategies to display high-level information customized for individual users. The user then analyzes the data further, makes decisions and takes action based on that analysis. Many utilities today use this dashboard model to monitor critical assets based on both real-time and historical data, as sketched below.
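A minimal sketch of that dashboard pattern follows, assuming invented tag names and limits rather than any actual OSIsoft or Obvient Strategies interface: real-time and historical values are compiled into a single tile with a basic exception flag, and the operator decides what to do with it.

```python
from statistics import mean

def asset_tile(tag: str, live_value: float, history: list[float],
               high_limit: float) -> dict:
    """Roll one critical-asset measurement up into a dashboard tile:
    current value, historical baseline and a simple exception flag."""
    baseline = mean(history) if history else float("nan")
    return {
        "tag": tag,
        "current": live_value,
        "baseline": round(baseline, 1),
        "alarm": live_value > high_limit,  # basic analysis; the user decides what to do
    }

# Invented example: top-oil temperature on a substation transformer.
tile = asset_tile(
    tag="SUB4.XFMR2.TopOilTemp_degC",
    live_value=96.0,
    history=[71.0, 74.5, 69.8, 73.2],
    high_limit=90.0,
)
print(tile)
```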

ENSURING A MORE INTELLIGENT GRID TODAY AND TOMORROW

As these backbone components show, utilities already have some intelligence on their grids. Now, they’re building on that intelligence by leveraging existing infrastructure and resources — whether it’s voice communications providers for data transmission or Quid resources to build a foundation for the veQuids of tomorrow. In particular, utilities need to look at:

  • Scalability. Utilities need to make sure that whatever technologies they put on the grid today can grow to accommodate larger portions of the grid in the future.
  • Flexibility. Given rapid technology changes in the marketplace, utilities need to make sure their technology is flexible and adaptable. For example, utilities should consider smart meters that have the ability to change out communications cards to allow for new technologies.
  • Integration. Due to the evolutionary nature of the grid, and with so many intelligent grid components that must work together (intelligent sensors at substations, transformers and power lines; smart meters; and distributed and renewable energy sources), utilities need to make sure these disparate components can work with one another. Utilities need to consider how to introduce more flexibility into their intelligent grids to accommodate the increasingly complex network of devices.

As today’s utilities employ targeted efforts to build intelligence about the grid, they must keep in mind that whatever action they take today – no matter how small – must ultimately help them meet the demands of tomorrow.

The Distributed Utility of the (Near) Future

The next 10 to 15 years will see major changes – what future historians might even call upheavals – in the way electricity is distributed to businesses and households throughout the United States. The exact nature of these changes and their long-term effect on the security and economic well-being of this country are difficult to predict. However, a consensus already exists among those working within the industry – as well as with politicians and regulators, economists, environmentalists and (increasingly) the general public – that these fundamental changes are inevitable.

This need for change is in evidence everywhere across the country. The February 26, 2008, temporary blackout in Florida served as just another warning that the existing paradigm is failing. Although at the time of this writing, the exact cause of that blackout had not yet been identified, the incident serves as a reminder that the nationwide interconnected transmission and distribution grid is no longer stable. To wit: disturbances in Florida on that Tuesday were noted and measured as far away as New York.

A FAILING MODEL

The existing paradigm of nationwide grid interconnection, brought about primarily by the deregulation movement of the late 1990s, calls for electricity to be generated at large plants in various parts of the country and then distributed nationwide. There are two reasons this paradigm is failing. First, the transmission and distribution system wasn’t designed to serve as a nationwide grid; it is aged and only marginally stable. Second, political, regulatory and social forces are making the construction of large generating plants increasingly difficult, expensive and ultimately unfeasible.

The previous historic paradigm made each utility primarily responsible for generation, transmission and distribution in its own service territory; this had the benefit of localizing disturbances and fragmenting responsibility and expense. With loose interconnections to other states and regions, a disturbance in one area or a lack of resources in a different one had considerably less effect on other parts of the country, or even other parts of service territories.

For better or worse, we now have a nationwide interconnected grid – albeit one that was neither designed for the purpose nor serves it adequately. Although the existing grid can be improved, the expense would be massive, and probably cost prohibitive. Knowledgeable industry insiders, in fact, calculate that it would cost more than the current market value of all U.S. utilities combined to modernize the nationwide grid and replace its large generating facilities over the next 30 years. Obviously, the paradigm is going to have to change.

While the need for dramatic change is clear, though, what’s less clear is the direction that change should take. And time is running short: North American Electric Reliability Corp. (NERC) projects serious shortages in the nation’s electric supply by 2016. Utilities recognize the need; they just aren’t sure which way to jump first.

With a number of tipping points already reached (and the changes they describe continuing to accelerate), it’s easy to envision the scenario that’s about to unfold. Consider the following:

  • The United States stands to face a serious supply/demand disconnect within 10 years. Unless something dramatic happens, there simply won’t be nearly enough electricity to go around. Already, some parts of the country are feeling the pinch. And regulatory and legislative uncertainty (especially around global warming and environmental issues) makes it difficult for utilities to know what to do. Building new generation of any type other than “green energy” is extremely difficult, and green energy – which currently meets less than 3 percent of U.S. supply needs – cannot close the growing gap between supply and demand being projected by NERC. Specifically, green energy will not be able to replace the 50 percent of U.S. electricity currently supplied by coal within that 10-year time frame.
  • Fuel prices continue to escalate, and the reliability of the fuel supply continues to decline. In addition, increasing restrictions are being placed on fuel selection, especially coal.
  • A generation of utility workers is nearing retirement, and finding adequate replacements among the younger generation is proving increasingly difficult.
  • It’s extremely difficult to site new transmission – needed to deal with supply-and-demand issues. Even new Federal Energy Regulatory Commission (FERC) siting authority within designated transmission corridors is being met with virulent opposition.

SMART GRID NO SILVER BULLET

Distributed generation – including many smaller supply sources to replace fewer large ones – and “smart grids” (designed to enhance delivery efficiency and effectiveness) have been posited as solutions. However, although such solutions offer potential, they’re far from being in place today. At best, smart grids and smarter consumers are only part of the answer. They will help reduce demand (though probably not enough to make up the generation shortfall), and they’re both still evolving as concepts. While most utility executives recognize the problems, they continue to be uncertain about the solutions and have a considerable distance to go before implementing any of them, according to recent Sierra Energy Group surveys.

According to these surveys, more than 90 percent of utility executives now feel that the intelligent utility enterprise and smart grid (IUE/SG) – that is, the distributed utility – represents an inevitable part of their future (Figure 1). This finding was true of all utility types supplying electricity.

Although utility executives understand the problem and the IUE/SG approach to solving part of it, they’re behind in planning on exactly how to implement the various pieces. That “planning lag” for the vision can be seen in Figure 2.

At least some fault for the planning lag can be attributed to forces outside the utilities. While politicians and regulators have been emphasizing conservation and demand response, they’ve failed to produce guidelines for how this will work. And although a number of states have established mandatory green power percentages, Congress declined to do the same in the energy bill adopted in December 2007 (the Energy Independence and Security Act). While the EPACT of 2005 “urged” regulators to “urge” utilities to install smart meters, it didn’t make their installation a requirement, and thus regulators have moved at different speeds in different parts of the country on this urging.

Although we’ve entered a new era, utilities remain burdened with the internal problems caused by the “silo mentality” left over from generations of tight regulatory control. Today, real-time data is often still jealously guarded in engineering and operations silos. However, a key component in the development of intelligent utilities will be pushing both real-time and back-office data onto dashboards so that executives can make real-time decisions.

Getting from where utilities were (and in many respects still are) in the last century to where they need to be by 2018 isn’t a problem that can be solved overnight. And, in fact, utilities have historically evolved slowly. Today’s executives know that technological evolution in the utility industry needs to accelerate rapidly, but they’re uncertain where to start. For example, should you install advanced metering infrastructure (AMI) as rapidly as possible? Do you emphasize automating the grid and adding artificial intelligence? Do you continue to build out mobile systems to push data (and more detailed, simpler instructions) to field crews who soon will be much younger and less experienced? Do you rush into home automation? Do you build windmills and solar farms? Utilities have neither the financial nor human resources to do everything at once.

THE DEMAND FOR AMI

Its name implies that a smart grid will become increasingly self-operating and self-healing – and indeed much of the technology for this type of intelligent network grid has been developed. It has not, however, been widely deployed. Utilities, in fact, have been working on basic distribution automation (DA) – the capability to operate the grid remotely – for a number of years.

As mentioned earlier, most theorists – not to mention politicians and regulators – feel that utilities will have to enable AMI and demand response/home automation if they’re to encourage energy conservation in an impending era of short supplies. While automated meter reading (AMR) has been around for a long time, its penetration remains relatively small in the utilities industry – especially in the case of advanced AMI meters for enabling demand response: According to figures released by Sierra Energy Group and Newton-Evans Research Co., only 8 to 10 percent of this country’s utilities were using AMI meters by 2008.

That said, the push for AMI on the part of both EPACT 2005 and regulators is having an obvious effect. Numerous utilities (including companies like Entergy and Southern Co.) that previously refused to consider AMR now have AMI projects in progress. However, even though an anticipated building boom in AMI is finally underway, there’s still much to be done to enable the demand response that will be desperately needed by 2016.

THE AUTOMATED HOME

The final area we can expect the IUE/SG concept to envelop is the residential level. With residential home automation in place, utilities will be able to control usage directly – by adjusting thermostats or compressor cycling, or via other techniques. Again, the technology for this has existed for some time; however, there are very few installations nationwide. A number of experiments were conducted with home automation in the early- to mid-1990s, with some subdivisions even being built under the mantra of “demand-side management.”

Demand response – the term currently in vogue with politicians – may be considered more politically correct, but the net result is the same. Home automation will enable regulators, through utilities, to ration usage. Although politicians avoid using the word rationing, if global warming concerns continue to seriously impact utilities’ ability to access adequate generation, rationing will be the result – making direct load control at the residential level one of the most problematic issues in the distributed utility paradigm of the future. Are large numbers of Americans going to acquiesce calmly to their electrical supply being rationed? No one knows, but there seem to be few options.

GREEN PRESSURE AND THE TIPPING POINT

While much legitimate scientific debate remains about whether global warming is real and, if so, whether it’s a naturally occurring or man-made phenomenon (arising primarily from carbon dioxide emissions), that debate is diminishing among politicians at every level. The majority of politicians, in fact, have bought into the notion that carbon emissions from many sources – primarily the generation of electricity by burning coal – are the culprit.

Thus, despite continued scientific debate, the political tipping point has been reached, and U.S. politicians are making moves to force this country’s utility industry to adapt to a situation that may or may not be real. Whether or not it makes logical or economic sense, utilities are under increasing pressure to adopt the Intelligent Utility/Smart Grid/Home Automation/Demand Response model – a model that includes many small generation points to make up for fewer large plants. This political tipping point is also shutting down more proposed generation projects each month, adding to the likely shortage. Since 2000, approximately 50 percent of all proposed new coal-fired generation plants have been canceled, according to energy-industry adviser Wood Mackenzie (Gas and Power Service Insight, February 2008).

In the distant future, as technology continues to advance, electric generation in the United States will likely include a mix of energy sources, many of them distributed and green. However, there’s no way that in the next 10 years – the window of greatest concern in the NERC projections on the generation and reliability side – green energy will be ready and available in sufficient quantities to forestall a significant electricity shortfall. Nuclear energy represents the only truly viable solution; however, ongoing opposition to this form of power generation makes it unlikely that sufficient nuclear energy will be available within this period. The already-lengthy licensing process (though streamlined somewhat of late by the Nuclear Regulatory Commission) is exacerbated by lawsuits and opposition every step of the way. In addition, most of the necessary engineering and manufacturing processes have been lost in the United States over the last 30 years – the time elapsed since the last U.S. nuclear plant was built – making it necessary to reacquire that knowledge from abroad.

The NERC Reliability Report of Oct. 15, 2007, points strongly toward a significant shortfall of electricity within approximately 10 years – a situation that could lead to rolling blackouts and brownouts in parts of the country that have never experienced them before. It could also lead to mandatory “demand response” – in other words, rationing – at the residential level. This situation, however, is not inevitable: technology exists to prevent it (including nuclear and cleaner coal now as well as a gradual development of solar, biomass, sequestration and so on over time, with wind for peaking). But thanks to concern over global warming and other issues raised by the environmental community, many politicians and regulators have become convinced otherwise. And thus, they won’t consider a different tack to solving the problem until there’s a public outcry – and that’s not likely to occur for another 10 years, at which point the national economy and utilities may already have suffered tremendous (possibly irreparable) harm.

WHAT CAN BE DONE?

The problem the utilities industry faces today is neither economic nor technological – it’s ideological. The global warming alarmists are shutting down coal before sufficient economically viable replacements (with the possible exception of nuclear) are in place. And the rest of the options are tied up in court. (For example, converting to gas – a costly fuel with iffy reliability – would require some 45 liquefied natural gas facilities, but only five have been built; the rest are tied up in court.) As long as it’s possible to tie up nuclear applications for five to 10 years and shut down “clean coal” plants through the political process, the U.S. utility industry is left with few options.

So what are utilities to do? They must get much smarter (IUE/SG), and they must prepare for rationing (AMI/demand response). As seen in SEG studies, utilities still have a ways to go in these areas, but at least this is a strategy that can (for the most part) be put in place within 10 to 15 years. The technology for IUE/SG already exists; it’s relatively inexpensive (compared with large-scale green energy development and nuclear plant construction); and utilities can employ it with relatively little regulatory oversight. In fact, regulators are actually encouraging it.

For these reasons, IUE/SG represents a major bridge to a more stable future. Even if today’s apocalyptic scenarios fail to develop – that is, global warming is debunked, or new generation sources develop much more rapidly than expected – intelligent utilities with smart grids will remain a good idea. The paradigm is shifting as we watch – but will that shift be completed in time to prevent major economic and social dislocation? Fasten your seatbelts: the next 10 to 15 years should be very interesting!

The Customer-Focused Utility

THE CHANGING DYNAMICS OF CUSTOMER RELATIONSHIPS

The utilities industry is in transition. External factors – including shifts in governmental policies, a globally felt sense of urgency about conserving energy, advances in power generation techniques and new technologies – are driving massive changes throughout the industry. Utilities are also under internal pressure to prevent profit margins from eroding. But most significantly, utilities must evolve to compete in a marketplace where consumers increasingly expect high-quality customer service and believe that no company deserves their unconditional loyalty if it cannot perform to expectations. These pressures are putting many utility providers into seriously competitive, market-driven situations where the customer experience becomes a primary differentiator.

In the past, utility companies had very limited interactions with customers. Apart from opening new accounts and billing for services, the relationship was remote, with customers giving no more thought to their power provider than they would to finding a post office. Consumers were indifferent to greenhouse gas (GHG) emissions and essentially took a passive view of all utility functions, only contacting the utility if their lights temporarily went out.

In contrast, the utility of the future can expect a much more intense level of customer involvement. If utilities embrace programs to change customers’ behaviors – for example, by implementing time-of-use rates – customers will need more information on a timelier basis in order to make educated decisions. In addition, customers will expect higher levels of service to keep up with changes in the rest of the commercial world. As consumers get used to checking their bank account and credit card balances via mobile devices, they’ll soon expect the same from all similar services, including their utility company. As younger consumers (Generation Y and now Generation Z) begin their relationships with utilities, they bring expectations of a digital, mobile and collaborative customer service experience. Taking a broader perspective, most age segments – even baby boomers – will begin demanding these new multichannel experiences at times that are convenient for them.

The most significant industry shifts will alter the level of interaction between the utility grid and the home. In the past, this was a one-way street; in the future, however, more households will be adopting “participatory generation” due to their increased use of renewable energy. This will require a more sophisticated home/grid relationship, in order to track the give and take of power between consumers as both users and generators. This shift will likely change the margin equation for most utility companies.

Customer Demands Drive Technology Change; Technology Change Drives Customer Demand

Utilities are addressing these and other challenges by implementing new business models that are supported by new technologies. The most visible – and arguably the most important – of the new technologies are advanced metering infrastructure (AMI) and the technical components of the smart grid, which integrates AMI with distribution automation and other technologies to connect a utility’s equipment, devices, systems, customers, partners and employees. The integration of these technologies with customer information systems (CIS) and other customer relationship management (CRM) tools will increase consumer control of energy expenditures. Most companies in the industry will need to shift away from the “ratepayer” approach they currently use to serve residential and small business customers, and adapt to changing consumer behavior and emerging business models enabled by new network and generation technologies.

Impacts on the Customer Experience

There are multiple paths to smart grid deployment, and utilities have employed all of them to leverage new sources of data on power demand. If we consider a gradual transformation from today’s centralized, one-way view to a network that is both distributed and dynamic, we can begin to project how technological shifts will impact the utility-consumer relationship, as illustrated in Figure 1.

The future industry value chain for grid-connected customers will have the same physical elements and flow as the current one but will provide many more information-oriented elements. Consequently, the shift to a customer-focused view will have serious implications for data management, including a proliferation of data as well as new mandates for securely tracking, updating, accessing and analyzing that data and ensuring its quality.

In addition, utilities must develop customer experience capabilities in parallel with extending their energy information management capabilities. Taking the smart grid path requires customers to be more involved, as decision-making responsibility shifts more toward the consumer, as depicted in Figure 2.

It’s also important to consider some of the new interactions that consumers will have with their utility company. Some of these will be viewed as “features” of the new technology, whereas others may significantly change how consumers view their relationship with their energy provider. Still others will have a profound impact on how data is captured and deployed within the organization. These interactions may include:

  • Highly detailed, timely and accurate individuated customer information;
  • Interaction between the utility and smart devices – including the meter – in the home (possibly based on customers’ preferences);
  • Seamless, bidirectional, individual communication permitting an extended dialogue across multiple channels such as short message service, integrated voice response, portals and customer care;
  • Rapid (real-time) analysis of prior usage, current usage and prediction of future usage under multiple usage and tariff models;
  • Information presented in a customer-friendly manner;
  • Analytical tools that enable customers to model their consumption behavior and understand the impact of changes on energy cost and carbon footprint (a simplified tariff comparison is sketched after this list);
  • Ability to access and integrate a wide range of external information sources, and present pertinent selections to a customer;
  • Integration of information flow from field operations to the customer call center infrastructure; and
  • Highly skilled, knowledgeable contact center agents who can not only provide accurate information but can advise and recommend products, services, rate plans or changes in consumption profiles.
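To make the multi-tariff analysis items above concrete, the hedged sketch below compares one day of hourly usage under a flat rate and a two-period time-of-use rate. The load profile and prices are invented for illustration.

```python
def flat_cost(hourly_kwh: list[float], rate: float) -> float:
    """One day's bill under a single flat rate ($/kWh)."""
    return sum(hourly_kwh) * rate

def tou_cost(hourly_kwh: list[float], on_peak_rate: float,
             off_peak_rate: float, peak_hours: range) -> float:
    """One day's bill under a two-period time-of-use rate."""
    return sum(
        kwh * (on_peak_rate if hour in peak_hours else off_peak_rate)
        for hour, kwh in enumerate(hourly_kwh)
    )

# Invented 24-hour load profile (kWh per hour) with an evening peak.
usage = [0.6] * 16 + [2.0, 2.4, 2.2, 1.8] + [0.8] * 4

flat = flat_cost(usage, rate=0.12)
tou = tou_cost(usage, on_peak_rate=0.22, off_peak_rate=0.08, peak_hours=range(16, 20))
print(f"Flat rate bill:        ${flat:.2f}")
print(f"Time-of-use rate bill: ${tou:.2f}")
# Shifting some of the evening load off peak would narrow the difference.
```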

Do We Need to Begin Thinking About Customers Differently?

Two primary factors will determine the nature of the interface between utilities and consumers in the future. The first is the degree to which consumers will take the initiative in making decisions about the energy supply and their own energy consumption. Second, the amount and percentage of consumers’ disposable income that they allocate to energy will directly influence their consumption and conservation choices, as shown in Figure 3.

How Do Utilities Influence Customers’ Behavior?

One of the major benefits of involving energy customers in generation and consumption decisions is that it can serve to decrease base load. Traditionally, utilities have taken two basic approaches to accomplishing this: coercion and enticement. Coercion is a penalty-based approach for inducing a desired behavior. For example, utilities may charge higher rates for peak period usage, forcing customers to change the hours when they consume power or pay more for peak period usage. The risks of this approach include increased customer dissatisfaction and negative public and regulatory opinion.

Enticement, on the other hand, is an incentive-based approach for driving a desired behavior. For example, utilities could offer cost savings to customers who shift power consumption to off-peak times. The risks associated with this approach include low customer involvement, because incentives may not be enough to overcome the inconvenience to customers.

Both of these approaches have produced results in the past, but neither will necessarily work in the new, more interactive environment. A number of other strategies may prove more effective in the future. For example, customer goal achievement may be one way to generate positive behavior. This model offers benefits to customers by making it easier for them to achieve their own energy consumption or conservation goals. It also gives customers the feeling that they have choices – which promotes a more positive relationship between the customer and the utility. Ease of use represents another factor that influences customer behavior. Companies can accomplish this by creating programs and interfaces that make it simple for the customer to analyze information and make decisions.

There is no “silver bullet” approach to successfully influencing all customers in all utility environments. Often, each customer segment must be treated differently, and each utility company will need to develop a unique customer experience strategy and plan that fits the needs of its unique business situation. The variables will include macro factors such as geography, customer econo-graphics and energy usage patterns; however, they’ll also involve more nuanced attributes such as customer service experiences, customer advocacy attitudes and their individual emotional dispositions.

CONCLUSION

Most utilities considering implementing advanced metering or broader smart grid efforts focus almost exclusively on deploying new technologies. However, they also need to consider customer behavior. Utilities must adopt a new approach that expands the scope of their strategic road map by integrating the “voice of the customer” into the technology planning and deployment process.

By carefully examining a utility customer’s expectations and anticipating the customer impacts brought on by innovative technologies, smart utility companies can get ahead of the customer experience curve, drive more value to the bottom line and ultimately become truly customer focused.

Developing a Customer Value Transformation Road Map

Historically, utility customers have had limited interactions with their electric or gas utilities, except to start or stop service, report outages, and pay bills or resolve billing questions. This situation is changing as the result of factors that include rising energy prices, increasing concerns about the environment and trends toward more customer interaction and control among other service providers – such as cell phone companies. Over the next five to 10 years, we expect utility customers to continue seeking improvements in three key areas:

  • Increased communication with their utility company, through a greater variety of media;
  • Improved understanding of and control over their own energy use; and
  • More accurate and timely information on outage events and service restoration.

Moreover, as the generations that have grown up with cell phones, the Internet, MP3 players and other digital devices move into adulthood, they will expect utilities to keep pace with their own technological sophistication. These new customers will assume that they can customize the nature of their communications with both friends and businesses. Utilities that can provide these capabilities will unlock new sources of revenue and be better able to retain customers when faced with competition.

The intelligent utility network (IUN) will be a key enabler of these new customer capabilities and services. But not all customers will want all of the new capabilities, so utilities need to understand and carefully analyze the value of each among various customer segments. This will require utilities to prepare sound business cases and prioritize their plans for meeting future customer needs.

One of the first initiatives that utilities launching an IUN program should undertake is the development of a “customer value transformation road map.” The road map approach allows utilities to establish the types of capabilities and services that customers will want, to identify and define the gaps in current processes and systems that must be overcome to meet these needs, and to develop plans to close those gaps.

TRANSFORMATION ROAD MAP DEVELOPMENT APPROACH

Our approach for developing the customer value transformation road map includes four tasks, as depicted in Figure 1.

Task 1: Customer Requirements

The primary challenge facing utilities in defining customer requirements is the need to anticipate their desires and preferences at least five to 10 years into the future. Developing this predictive vision can be difficult for managers because they’re often “locked into” their current views of customers, and their expectations are based largely on historical experience. To overcome this, utilities can learn from other industries that are already traveling this path.

The telecommunications providers, as one example, have made substantial progress in meeting evolving customer needs over the last decade. While more changes lie ahead for telecommunications, the industry has significantly enhanced the customer experience, created differentiated capabilities for various customer segments and succeeded in developing many of these capabilities into profit-generating services. This progress can serve as both an inspiration and a guide as utilities start down a similar path.

The first step in defining future customer requirements is to segment the customer base into the various customer groups that are likely to have different needs. Although these segments will likely vary for each utility, we believe that the following seven major customer segments serve as a useful starting point for this work:

  • Residential – tech savvy. These are customers who want many different electronic communication pathways but don’t necessarily want to develop a detailed understanding of the trends and patterns in their energy usage.
  • Residential – low tech. These customers prefer traditional, less high tech ways of communicating, but may want to perform analysis of their usage.
  • Residential – low income. These are customers who want to understand what’s driving their energy expenditures and how to reduce their bills; many of these customers are also tech savvy.
  • Special needs. These customers, often elderly, may live on fixed incomes; they are accustomed to careful planning and want no surprises in their interactions with providers of utility services. They frequently need help from others to manage their daily activities.
  • Small business. These commercial customers are typically very cost-conscious and highly adaptable and seek creative but relatively simple solutions to their energy management challenges.
  • Large commercial. These are customers who are cost-conscious and capable of investing substantial time and money in order to analyze and reduce their energy use in sophisticated ways.
  • Industrial. These very large customers are sophisticated, cost-conscious and increasingly focused on environmental issues.

The next step in defining future customer requirements is to understand the points in the utility value chain at which customers will interact with their utility. Based on recent trends for both utilities and other industries, the following “touch point” areas are a good starting point:

  • Reliability and restoration;
  • Billing;
  • Customer service;
  • Energy information and control; and
  • Environment.

Not all of these requirements will be important to all customer segments. It is essential to establish the most important requirements for each segment and each touch point. Figure 2 provides one example of a preliminary assessment of the relative importance of selected customer requirements for the reliability and restoration category, across the seven specified customer segments. Each customer need is assigned a high (H), medium (M) or low (L) rank.

Once this preliminary assessment is completed, utilities should consider conducting several workshops with participants from various functional departments. The goal of these workshops is to obtain feedback, to evaluate even more thoroughly the importance of each potential requirement and to begin to secure internal acceptance of the customer requirements that are determined to be worth pursuing. Departments that should participate in such workshops include those focused on regulatory requirements, billing, corporate communications, demand-side management, customer operations, complaint resolution and outage management.

One way of making the workshop process more “real” and therefore more effective is to develop customer use scenarios that incorporate each potential requirement. For example, the following billing scenarios could be used to illustrate potential customer requirements and to facilitate more effective evaluation of what will be needed for billing:

  • Billing Scenario 1. I want my gas and electric bills to be unified so that I don’t have to spend extra time making multiple payments. Also, I want the choice of paying my bill electronically, by mail or in person, based on what’s convenient for me, not what’s convenient for my utility.
  • Billing Scenario 2. My parents, who are now retired, receive fixed pension checks, and I want their utility to set up a payment plan for them that results in equal payments over the year, rather than high payments in the summer and low payments in the winter. My parents also want the ability to see a summarized version of their bill in large print, so that they can easily read and understand their energy use and costs.
  • Billing Scenario 3. My kids are on their computer nearly all of the time, and the remainder of the time they seem to be playing their video games. Also, they rarely turn off lights, and all of these things are increasing my energy bills. I want my utility to help me set up a balance limit so that if our energy usage reaches a set level, I’m automatically notified and I have the option of taking some corrective actions. I also expect my meter readings to be accurate rather than simply rough estimates, because I want to understand exactly how much energy I am consuming and what it’s costing me.

In addition to assessing the value of each requirement to customers, it is also important to rank these requirements based on other factors, such as their impacts on the utility. Financial costs and benefits, for example, clearly need to be estimated and considered when evaluating a requirement, regardless of how important the requirement will be to customers. To draw all of these assessments together, it is useful to assign weights to each assessment area – for example, a weight of 35 percent for customer importance, 30 percent for utility costs/benefits and 35 percent for the value that regulators will perceive. Once an appropriate weighting scheme is applied, the utility can rank the requirements and develop a list of those with the highest priority.
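The weighting scheme described above reduces to a simple calculation. The sketch below scores two hypothetical requirements using the example weights from the text (35 percent customer importance, 30 percent utility costs/benefits, 35 percent regulatory value); the 1-to-5 scores and requirement names are invented.

```python
# Example weights from the text: customer importance, utility costs/benefits, regulatory value.
WEIGHTS = {"customer": 0.35, "utility": 0.30, "regulatory": 0.35}

def weighted_score(scores: dict[str, float]) -> float:
    """Combine per-area scores (here on a 1-to-5 scale) into one ranking value."""
    return sum(WEIGHTS[area] * score for area, score in scores.items())

# Invented assessments of two candidate customer requirements.
requirements = {
    "Unified gas/electric bill": {"customer": 4, "utility": 3, "regulatory": 3},
    "Real-time outage status notifications": {"customer": 5, "utility": 2, "regulatory": 4},
}

ranked = sorted(requirements.items(), key=lambda item: weighted_score(item[1]), reverse=True)
for name, scores in ranked:
    print(f"{weighted_score(scores):.2f}  {name}")
```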

Task 2: Gaps

To assess gaps in current capabilities that could prevent a utility from meeting important and valuable customer requirements, the utility should next identify the business processes, organizations and technologies that will “deliver” those requirements. This requires a careful analysis of current and planned process, organizational and technology capabilities, which can be challenging because other initiatives will be affecting these areas even as customer requirements evolve. Moreover, many utilities do not have accurate, detailed documentation of current processes and systems. Therefore, a series of workshops and interviews with functional and technology leaders and staff is necessary. The results of these workshops should be supplemented by analysis of planned systems and process transformations, in order to assess current gaps and to determine whether those gaps will be closed – based on plans that are already in place. If such gaps remain, new projects and capital investments may be required to close them and to meet expected customer requirements.

During the gap assessment process, it’s critical that the customer value team work closely with other IUN teams to ensure that the customer value gap analysis is coordinated with the broader gap analysis for the IUN program. Important areas to coordinate include automated meter information, demand-side management, outage management and asset management.

Task 3: Business Case Support

While conducting the first two tasks, the assessment team should be able to develop a deep understanding of the costs required to meet the important customer requirements as well as the financial benefits. Because it’s typical to develop consolidated business cases for the IUN, the customer value team should work with the overall IUN business case team to support business case development by bringing this information into the process.

Task 4: Transformation Road Map

This final task builds on an understanding of both the customer requirements and the gaps in current operations to create the customer value transformation road map. The initiatives in the road map will typically be defined across the following primary areas:

  • Process;
  • Technology;
  • Performance metrics;
  • Organization and training; and
  • Project management.

For each of these areas, the road map will establish the timing and sequence of initiatives to close the gaps, based on:

  • The utility’s strategic priorities and capacity for change;
  • Linkages to the utility’s overall IUN transformation plans; and
  • Technology dependencies and links to other work areas.
Figure 3 provides a summary of the initiatives from a typical customer value transformation road map. The detail behind this summary provides a path to transforming the customer-related operations to meet expected customer requirements over the next five to 10 years.

CONCLUSION

Our “customer value transformation road map” approach provides utilities with a structured process for identifying, assessing and prioritizing future customer requirements. Utilities that are successful in developing such a road map will be better prepared to build customer needs into their overall IUN transformation plans. These companies will in turn increase the likelihood that their IUN transformation will improve customer satisfaction, reduce customer care costs and lead to new sources of revenue.

The GridWise Olympic Peninsula Project

The Olympic Peninsula Project consisted of a field demonstration and test of advanced price signal-based control of distributed energy resources (DERs). Sponsored by the U.S. Department of Energy (DOE) and led by the Pacific Northwest National Laboratory, the project was part of the Pacific Northwest GridWise Testbed Demonstration.

Other participating organizations included the Bonneville Power Administration, Public Utility District (PUD) #1 of Clallam County, the City of Port Angeles, Portland General Electric, IBM’s T.J. Watson Research Center, Whirlpool and Invensys Controls. The main objective of the project was to convert normally passive loads and idle distributed generation into actively participating resources optimally coordinated in near real time to reduce stress on the local distribution system.

Planning began in late 2004, and the bulk of the development work took place in 2005. By late 2005, equipment installations had begun, and by spring 2006, the experiment was fully operational, remaining so for one full year.

The motivating theme of the project was based on the GridWise concept that inserting intelligence into electric grid components at every point in the supply chain – from generation through end-use – will significantly improve both the electrical and economic efficiency of the power system. In this case, information technology and communications were used to create a real-time energy market system that could control demand response automation and distributed generation dispatch. Optimal use of the DER assets was achieved through the market, which was designed to manage the flow of power through a constrained distribution feeder circuit.

The project also illustrated the value of interoperability in several ways, as defined by the DOE’s GridWise Architecture Council (GWAC). First, a highly heterogeneous set of energy assets, associated automation controls and business processes was composed into a single solution integrating a purely economic or business function (the market-clearing system) with purely physical or operational functions (thermostatic control of space heating and water heating). This demonstrated interoperability at the technical and informational levels of the GWAC Interoperability Framework (www.gridwiseac.org/about/publications.aspx), providing an ideal example of a cyber-physical-business system. In addition, it represents an important class of solutions that will emerge as part of the transition to smart grids.

Second, the objectives of the various asset owners participating in the market were continuously balanced to maintain the optimal solution at any point in time. This included the residential demand response customers; the commercial and municipal entities with both demand response and distributed generation; and the utilities, which demonstrated interoperability at the organizational level of the framework.

PROJECT RESOURCES

The following energy assets were configured to respond to market price signals:

  • Residential demand response for electric space and water heating in 112 single-family homes using gateways connected by DSL or cable modem to provide two-way communication. The residential demand response system allowed the current market price of electricity to be presented to customers. Consumers could also configure their demand response automation preferences. The residential consumers were evenly divided among three contract types (fixed, time of use and real time) and a fourth control group. All electricity consumption was metered, but only the loads in price-responsive homes were controlled by the project (approximately 75 kW).
  • Two distributed generation units (175 kW and 600 kW) at a commercial site served the facility’s load when the feeder supply was not sufficient. These units were not connected in parallel to the grid, so they were bid into the market as a demand response asset equal to the total load of the facility (approximately 170 kW). When the bid was satisfied, the facility disconnected from the grid and shifted its load to the distributed generation units.
  • One distributed microturbine (30 kW) that was connected in parallel to the grid. This unit was bid into the market as a generation asset based on the actual fixed and variable expenses of running the unit.
  • Five 40-horsepower (HP) water pumps distributed between two municipal water-pumping stations (approximately 150 kW of total nameplate load). The demand response load from these pumps was incrementally bid into the market based on the water level in the pumped storage reservoir, effectively converting the top few feet of the reservoir capacity into a demand response asset on the electrical grid (a simplified version of this incremental bidding is sketched after this list).

Monitoring was performed for all of these resources, and in cases of price-responsive contracts, automated control of demand response was also provided. All consumers who employed automated control were able to temporarily disable or override project control of their loads or generation units. In the residential real-time price demand response homes, consumers were given a simple configuration choice for their space heating and water heating that involved selecting an ideal set point and a degree of trade-off between comfort and price responsiveness.

For real-time price contracts, the space heating system itself bid into the market automatically. Since the programmable thermostats deployed in the project didn’t support real-time market bidding, IBM Research implemented virtual thermostats in software using an event-based distributed programming prototype called Internet-Scale Control Systems (iCS). The iCS prototype is designed to support distributed control applications that span virtually any underlying device or business process, using software sensor, actuator and control objects connected by an asynchronous event programming model that can be deployed on a wide range of communication and runtime environments. For this project, the virtual thermostats conceptually wrapped the real thermostats, retaining all of their functionality while adding the capabilities needed for real-time bidding. Each virtual thermostat received the actual temperature of the house, the real-time market’s average price and price distribution, and the consumer’s preferences for set point and comfort/economy trade-off. This allowed it to calculate an appropriate bid every five minutes as the temperature and the market price of energy changed.
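As a rough illustration of this kind of bid logic, the sketch below computes a heating bid from the current temperature, the consumer’s set point and comfort/economy setting, and recent market price statistics. The function name, the temperature band and the scaling of the price term are assumptions made for illustration; this is not the project’s actual algorithm.

```python
# Hypothetical sketch only; not the Olympic Peninsula project's actual bidding algorithm.
def thermostat_bid(current_temp, set_point, comfort_slider, mean_price, price_std,
                   temp_band=4.0):
    """Return a heating bid price ($/kWh) for the next five-minute market cycle.

    comfort_slider: 0.0 = pure economy (bids stay at the recent mean price),
    1.0 = maximum comfort (bids move furthest with temperature). Assumed scale.
    """
    # How far the house has drifted below the set point, normalized to an
    # assumed comfort band of +/- temp_band degrees.
    deviation = (set_point - current_temp) / temp_band
    # Assume the bid spans roughly three standard deviations of recent prices
    # across the comfort band.
    return mean_price + deviation * comfort_slider * 3.0 * price_std


# Example: a slightly cool house with a balanced comfort/economy setting.
print(thermostat_bid(68.5, set_point=70.0, comfort_slider=0.5,
                     mean_price=0.045, price_std=0.01))
```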

The real-time market in the project was implemented as a shadow market – that is, rather than change the actual utility billing structure, the project implemented a parallel billing system and a real-time market. Consumers still received their normal utility bill each month, but in addition they received an online bill from the shadow market. This additional bill was paid from a debit account that used funds seeded by the project based on historical energy consumption information for the consumer.

The objective was to provide an economic incentive to consumers to be more price responsive. This was accomplished by allowing the consumers to keep the remaining balance in the debit account at the end of each quarter. Those consumers who were most responsive were estimated to receive about $150 at the end of the quarter.

The market in the project cleared every five minutes, having received demand response bids, distributed generation bids and a base supply bid based on the supply capacity and wholesale price of energy in the Mid-Columbia system operated by Bonneville Power Administration. (This was accomplished through a Dow Jones feed of the Mid-Columbia price and other information sources for capacity.) The market operation required project assets to submit bids every five minutes into the market, and then respond to the cleared price at the end of the five-minute market cycle. In the case of residential space heating in real-time price contract homes, the virtual thermostats adjusted the temperature set point every five minutes; however, in most cases the adjustment was negligible (for example, one-tenth of a degree) if the price was stable.
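The following sketch shows one way a uniform-price, five-minute clearing cycle of this kind could be implemented. The bid format, the midpoint pricing rule and the example quantities are illustrative assumptions, not the project’s actual market engine.

```python
# Illustrative double-auction clearing for one five-minute cycle (not the project's code).
def clear_market(demand_bids, supply_bids):
    """Return (clearing_price, cleared_kw) for lists of (price, kw) bids."""
    demand = sorted(demand_bids, key=lambda b: -b[0])  # highest willingness to pay first
    supply = sorted(supply_bids, key=lambda b: b[0])   # cheapest supply first
    cleared_kw, price = 0.0, None
    d_i = s_i = 0
    d_price = s_price = 0.0
    d_left = s_left = 0.0
    while True:
        if d_left == 0.0:
            if d_i >= len(demand):
                break
            d_price, d_left = demand[d_i]
            d_i += 1
        if s_left == 0.0:
            if s_i >= len(supply):
                break
            s_price, s_left = supply[s_i]
            s_i += 1
        if d_price < s_price:
            break  # no more mutually beneficial trades
        traded = min(d_left, s_left)
        cleared_kw += traded
        d_left -= traded
        s_left -= traded
        price = (d_price + s_price) / 2.0  # one possible uniform-price rule
    return price, cleared_kw


# Example cycle with three demand bids and two supply bids ($/kWh, kW).
print(clear_market(demand_bids=[(0.12, 40.0), (0.09, 30.0), (0.05, 50.0)],
                   supply_bids=[(0.04, 60.0), (0.08, 60.0)]))
```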

KEY FINDINGS

Distribution constraint management. As one of the primary objectives of the experiment, distribution constraint management was successfully accomplished. The distribution feeder’s imported capacity was managed through demand response automation to a cap of 750 kW for all but one five-minute market cycle during the project year. In addition, distributed generation was dispatched as needed during the project, up to a peak of about 350 kW.

During one period of about 40 hours that took place from Oct. 30, 2006, to Nov. 1, 2006, the system successfully constrained the feeder import capacity at its limit and dispatched distributed generation several times, as shown in Figure 1. In this figure, actual demand under real-time price control is shown in red, while the blue line depicts what demand would have been without real-time price control. It should be noted that the red demand line steps up and down above the feeder capacity line several times during the event – this is the result of distributed generation units being dispatched and removed as their bid prices are met or not.

Market-based control demonstrated. The project controlled both heating and cooling loads, which showed a surprisingly significant shift in energy consumption. Space conditioning loads in real-time price contract homes demonstrated a significant shift to early morning hours – a shift that occurred during both constrained and unconstrained feeder conditions but was more pronounced during constrained periods. This is similar to what one would expect in preheating or precooling systems, but neither the real nor the virtual thermostats in the project had any explicit prediction capability. The analysis showed that the diurnal shape of the price curve itself caused the effect.

Peak load reduced. The project’s real-time price control system both deferred and shifted peak load very effectively. Unlike the time-of-use system, the real-time price control system operated at a fine level of precision, responding only when constraints were present and producing a precise, proportionally appropriate level of response. The time-of-use system, on the other hand, was much coarser and responded regardless of conditions on the grid, since it reacted only to preconfigured time schedules or manually initiated critical peak price signals.

Internet-based control demonstrated. Bids and control of the distributed energy resources in the project were implemented over Internet connections. As an example, the residential thermostats modified their operation through a combination of local and central control communicated as asynchronous events over the Internet. Even in situations of intermittent communication failure, resources typically performed well in default mode until communications could be re-established. This example of the resilience of a well-designed, loosely coupled distributed control application schema is an important aspect of what the project demonstrated.

Distributed generation served as a valuable resource. The project was highly effective in using the distributed generation units, dispatching them many times over the duration of the experiment. Since the diesel generators were restricted by environmental licensing regulations to operate no more than 100 hours per year, the bid calculation factored in a sliding scale price premium such that bids would become higher as the cumulative runtime for the generators increased toward 100 hours.
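A minimal sketch of such a sliding-scale premium appears below; the premium multiplier and function names are assumptions for illustration rather than the project’s actual bid calculation.

```python
# Illustrative only; the project's actual premium schedule is not reproduced here.
def generator_bid(base_cost_per_kwh, hours_run, hours_cap=100.0, max_premium=5.0):
    """Bid price for a distributed generator with an annual runtime limit.

    The bid rises with the fraction of the environmental runtime cap already
    consumed, so the unit clears less often as it approaches the 100-hour limit.
    max_premium is an assumed multiplier applied at the cap.
    """
    fraction_used = min(hours_run / hours_cap, 1.0)
    return base_cost_per_kwh * (1.0 + max_premium * fraction_used)


# Example: a unit with a $0.25/kWh running cost that has already run 60 hours.
print(generator_bid(0.25, hours_run=60))   # 0.25 * (1 + 5 * 0.6) = 1.0
```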

CONCLUSION

The Olympic Peninsula Project was unique in many ways. It clearly demonstrated the value of the GridWise concepts of leveraging information technology and incorporating market constructs to manage distributed energy resources. Local marginal price signals, as implemented through the market-clearing process, together with the overall event-based software integration framework, successfully managed the bidding and dispatch of loads and balanced the issues of wholesale costs, distribution congestion and customer needs in a very natural fashion.

The final report (as well as background material) on the project is available at www.gridwise.pnl.gov. The report expands on the remarks in this article and provides detailed coverage of a number of important assertions supported by the project, including:

  • Market-based control was shown to be a viable and effective tool for managing price-based responses from single-family premises.
  • Peak load reduction was successfully accomplished.
  • Automation was extremely important in obtaining consistent responses from both supply and demand resources.
  • The project demonstrated that demand response programs could be designed by establishing debit account incentives without changing the actual energy prices offered by energy providers.

Although technological challenges were identified and noted, the project found no fundamental obstacles to implementing similar systems at a much larger scale. Thus, it’s hoped that an opportunity to do so will present itself at some point in the near future.

Ontario Pilot

Smart metering technologies are making it possible to provide residential utility customers with the sophisticated “smart pricing” options once available only to larger commercial and industrial customers. When integrated with appropriate data manipulation and billing systems, smart metering systems can enable a number of innovative pricing and service regimes that shift or reduce energy consumption.

In addition, by giving customers ready access to up-to-date information about their energy demand and usage through a more informative bill, an in-home display monitor or an enhanced website, utilities can supplement smart pricing options and promote further energy conservation.

SMART PRICES

Examples of smart pricing options include the following (a simple time-of-use bill calculation is sketched after this list):

  • Time-of-use (TOU) is a tiered system where price varies consistently by day or time of day, typically with two or three price levels.
  • Critical peak pricing (CPP) imposes dramatically higher prices during specific days or hours in the year to reflect the actual or deemed price of electricity at that time.
  • Critical peak rebate (CPR) programs enable customers to receive rebates for using less power during specific periods.
  • Hourly pricing allows energy prices to change on an hourly basis in conformance with market prices.
  • Price adjustments reflect customer participation in load control, distributed generation or other programs.
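As a simple illustration of the first option, the sketch below bills one day of hourly readings under a three-period time-of-use tariff. The rates, period definitions and function names are hypothetical and do not reflect any particular utility’s schedule.

```python
# Hypothetical rates and periods; no actual TOU schedule is reproduced here.
def tou_bill(hourly_kwh, rates, peak_hours, mid_hours):
    """Bill 24 hourly readings (kWh) under a three-period time-of-use tariff.

    rates: dict with 'off_peak', 'mid_peak' and 'on_peak' prices in $/kWh.
    peak_hours, mid_hours: sets of hours (0-23); all other hours are off-peak.
    """
    total = 0.0
    for hour, kwh in enumerate(hourly_kwh):
        if hour in peak_hours:
            total += kwh * rates["on_peak"]
        elif hour in mid_hours:
            total += kwh * rates["mid_peak"]
        else:
            total += kwh * rates["off_peak"]
    return total


rates = {"off_peak": 0.035, "mid_peak": 0.075, "on_peak": 0.105}
usage = [1.0] * 24   # one flat kWh per hour, for illustration
print(tou_bill(usage, rates,
               peak_hours={11, 12, 17, 18, 19},
               mid_hours={7, 8, 9, 10, 13, 14, 15, 16}))
```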

SMART INFORMATION

Although time-sensitive pricing is designed primarily to reduce peak demand, these programs also typically result in a small reduction in overall energy consumption. This reduction is caused by factors independent of the primary objective of TOU pricing. These factors include the following:

  • Higher peak pricing causes consumers to eliminate, rather than merely delay, activities or habits that consume energy. Some of the load reductions that higher peak or critical peak prices produce are merely shifted to other time periods. For example, consumers do not stop doing laundry; they simply switch to doing it at non-peak times. In these cases the usage is “recovered.” Other load reductions, such as those resulting from consumers turning off lights or lowering heat, are not recovered, thus reducing the household’s total electricity consumption.
  • Dynamic pricing programs give participants a more detailed awareness of how they use electricity, which in turn results in lower consumption.
  • These programs usually increase the amount of usage information or feedback received by the customer, which also encourages lower consumption.

The key challenge for utilities and policy makers comes in deciding which pricing and communications structures will most actively engage their customers and drive the desired conservation behaviors. Studies show that good customer feedback on energy usage can reduce total consumption by 5 to 10 percent. Smart meters let customers readily access more up-to-date information about their hourly, daily and monthly energy usage via in-home displays, websites and even monthly bill inserts.

The smart metering program undertaken by the province of Ontario, Canada, presents one approach and serves as a useful example for utility companies contemplating similar deployments.

ONTARIO’S PROGRAM

In 2004, anticipating a serious energy generation shortfall in coming years, the government of Ontario announced plans to have smart electricity meters installed in 800,000 homes and small businesses by the end of 2007, and throughout Ontario by 2010. The initiative will affect approximately 4.5 million customers.

As the regulator of Ontario’s electricity industry, the Ontario Energy Board (OEB) was responsible for designing the smart prices that would go with these smart meters. The plan was to introduce flexible, time-of-use electricity pricing to encourage conservation and peak demand shifting. In June 2006, the OEB commissioned IBM to manage a pilot program that would help determine the best structure for prices and the best ways to communicate these prices.

By Aug. 1, 2006, 375 residential customers in the Ottawa area of Ontario had been recruited into a seven-month pilot program. Customers were promised $50 as an incentive for remaining on the pilot for the full period and $25 for completing the pilot survey.

Pilot participants continued to receive and pay their “normal” bimonthly utility bills. Separately, participants received monthly electricity usage statements that showed their electricity supply charges on their respective pilot price plan, as illustrated in Figure 1. Customers were not provided with any other new channels for information, such as a website or in-home display.

A control group that continued being billed at standard rates was also included in the study. Three pricing structures were tested in the pilot, with 125 customers in each group:

  • Time-of-use (TOU). Ontario’s TOU pricing includes off-peak, mid-peak and peak prices that change between the winter and summer seasons.
  • TOU with CPP. Customers were notified a day in advance that the price of the electricity commodity (not delivery) for three or four hours the next day would increase to 30 cents per kilowatt hour (kWh) – nearly six times the average TOU price. Seven critical peak events were declared during the pilot period – four in summer and three in winter. Figure 2 shows the different pricing levels.
  • TOU with CPR. During the same critical peak hours as CPP, participants were provided a rebate for reductions below their “baseline” usage. The baseline was calculated as the average usage for the same hours on the five previous non-event, non-holiday weekdays, multiplied by 125 percent (a simple version of this calculation is sketched after this list).
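The sketch below reproduces the baseline and rebate arithmetic described above. The function names and the rebate rate parameter are illustrative assumptions rather than the pilot’s actual settlement code.

```python
from statistics import mean


def cpr_baseline(history_kwh, multiplier=1.25):
    """Baseline for a critical peak rebate event: the average usage during the
    same hours on the five previous non-event, non-holiday weekdays, times 125%."""
    return mean(history_kwh) * multiplier


def cpr_rebate(history_kwh, actual_kwh, rebate_per_kwh):
    """Rebate for consumption below the baseline (never negative).
    rebate_per_kwh is an assumed program parameter."""
    reduction = max(cpr_baseline(history_kwh) - actual_kwh, 0.0)
    return reduction * rebate_per_kwh


# Example: five prior event-hour usages near 3 kWh, 2.0 kWh used during the event.
print(cpr_rebate([3.1, 2.9, 3.0, 3.2, 2.8], actual_kwh=2.0, rebate_per_kwh=0.30))
```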

The results from the Ontario pilot clearly demonstrate that customers want to be engaged and involved in their energy service and use. Consider the following:

  • Within the first week, and before enrollment was suspended, more than 450 customers responded to the invitation letter and submitted requests to be part of the pilot – a remarkable 25 percent response rate. In subsequent focus groups, participants emphasized a desire to better monitor their own electricity usage and give the OEB feedback on the design of the pricing. These were in fact the primary reasons cited for enrolling in the pilot.
  • In comparison to the control group, total load shifting during the four summertime critical peak periods ranged from 5.7 percent for TOU-only participants to 25.4 percent for CPP participants.
  • By comparing the usage of the treatment and control groups before and during the pilot, a substantial average conservation effect of 6 percent was recorded across all customers.
  • Over the course of the entire pilot period, on average, participants shifted consumption and paid 3 percent, or $1.44, less on monthly bills with the TOU pilot prices, compared with what they would have paid using the regular electricity prices charged by their utility. Of all participants, 75 percent saved money on TOU prices. Figure 3 illustrates the distribution of savings.
  • When this shift in consumption was combined with the reduction in customers’ overall consumption, a total average monthly savings of more than $4 resulted. From this perspective, 93 percent of customers would pay less on the TOU prices over the course of the pilot program than they would have with the regular electricity prices charged by their utility.
  • Citing greater control of their energy costs and benefits to the environment, 7 percent of participants surveyed said they would recommend TOU pricing to their friends.

There were also some unexpected results. For instance, there was no pattern of customers shifting demand away from the dinnertime peak period in winter. In addition, TOU-only pricing alone did not result in a statistically significant shifting of power away from peak periods.

CONCLUSION

In summary, participants in the Ontario Energy Board’s pilot program approved of these smarter pricing structures, used less energy overall, shifted consumption from peak periods in the summertime and, as a result, most paid less on their utility bills.

Over the next decade, as the utility industry evolves to the intelligent utility network and smart metering technologies are deployed to all customers, utilities will have many opportunities to implement new electricity pricing structures. This transition will represent a considerable technical challenge, testing the limits of the latest communications, data management, engineering, metering and security technologies.

But the greater challenge may come from customers. Much of the benefit from smart metering is directly tied to real, measurable and predictable changes in how customers use energy and interact with their utility provider. Capturing this benefit requires successful manipulation of the complex interactions of economic incentives, consumer behavior and societal change. Studies such as the OEB Smart Pricing Pilot provide another step in penetrating this complexity, helping the utility industry better understand how customers react and interact with these new approaches.

Pepco Holdings, Inc.

The United States and the world are facing two preeminent energy challenges: the rising cost of energy and the impact of increasing energy use on the environment. As a regulated public utility and one of the largest energy delivery companies in the Mid-Atlantic region, Pepco Holdings Inc. (PHI) recognized that it was uniquely positioned to play a leadership role in helping meet both of these challenges.

PHI calls the plan it developed to meet these challenges the Blueprint for the Future (Blueprint). The plan builds on work already begun through PHI’s Utility of the Future initiative, as well as other programs. The Blueprint focuses on implementing advanced technologies and energy efficiency programs to improve service to its customers and enable them to manage their energy use and costs. By providing tools for nearly 2 million customers across three states and the District of Columbia to better control their electricity use, PHI believes it can make a major contribution to meeting the nation’s energy and environmental challenges, and at the same time help customers keep their electric and natural gas bills as low as possible.

The PHI Blueprint is designed to give customers what they want: reasonable and stable energy costs, responsive customer service, power reliability and environmental stewardship.

PHI is deploying a number of innovative technologies. Some, such as its automated distribution system, help to improve reliability and workforce productivity. Other systems, including an advanced metering infrastructure (AMI), will enable customers to monitor and control their electricity use, reduce their energy costs and gain access to innovative rate options.

PHI’s Blueprint is both ambitious and complex. Over the next five years PHI will be deploying new technologies, modifying and/or creating numerous information systems, redefining customer and operating work processes, restructuring organizations, and managing relationships with customers and regulators in four jurisdictions. PHI intends to do all of this while continuing to provide safe and reliable energy service to its customers.

To assist in developing and executing this plan, PHI reached out to peer utilities and vendors. One significant “partner” group is the Global Intelligent Utility Network Coalition (GIUNC), established by IBM, which currently includes CenterPoint Energy (Texas), Country Energy (New South Wales, Australia) and PHI.

Leveraging these resources and others, PHI managers spent much of 2007 compiling detailed plans for realizing the Blueprint. Several aspects of these planning efforts are described below.

VISION AND DESIGN

In 2007, multiple initiatives were launched to flesh out the many aspects of the Blueprint. As Figure 1 illustrates, all of the initiatives were related and designed to generate a deployment plan based on a comprehensive review of the business and technical aspects of the project.

At this early stage, PHI does not yet have all the answers. Indeed, prematurely committing to specific technologies or designs for work that will not be completed for five years can raise the risk of obsolescence and lost investment. The deployment plan and system map, discussed in more detail below, are intended to serve as a guide. They will be updated and modified as decision points are reached and new information becomes available.

BUSINESS CASE VALIDATION

One of the first tasks was to review and define in detail the business case analyses for the project components. Both benefit assumptions and implementation costs were tested. Reference information (benchmarks) for this review came from a variety of sources: IBM’s experience in projects of similar scope and type, PHI materials and analysis, experiences reported by other GIUNC members and other utilities, and other publicly available sources. This information was compiled, and a present value analysis of the discounted cash flows and rate of return was conducted, as shown in Figure 2.
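As a simple illustration of the present value arithmetic underlying such a review, the sketch below discounts a stream of yearly program cash flows. The discount rate and the cash flow figures are purely illustrative and are not PHI’s.

```python
# Illustrative discounted cash flow sketch; the rate and figures are invented.
def npv(discount_rate, cash_flows):
    """Net present value of yearly cash flows, with year 0 (today) first."""
    return sum(cf / (1.0 + discount_rate) ** year
               for year, cf in enumerate(cash_flows))


# Hypothetical AMI program: up-front capital outlay, then growing yearly benefits
# ($ millions). A positive NPV at the chosen discount rate supports the case.
flows = [-120.0, 10.0, 25.0, 35.0, 45.0, 50.0]
print(round(npv(0.08, flows), 1))
```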

In addition to an “operational benefits” analysis, PHI and the Brattle Group developed value assessments associated with demand response offerings such as critical peak pricing. With demand response, peak consumption can be reduced and capacity costs avoided. This means lower total energy prices for customers and fewer new capacity additions in the market. As Figure 2 shows, even in the worst-case scenario for demand response savings, operational and customer benefits will offset the cost of PHI’s AMI investment.

The information from these various cases has since been integrated into a single program management tool. Additional capabilities for optimizing results based on value, cost and schedule were developed. Finally, dynamic relationships between variables were modeled and added to the tool, recognizing that assumptions don’t always remain constant as plans are changed. One example of this would be the likely increase in call center cost per meter when deployment accelerates and customer inquiries increase.

HIGH-LEVEL COMMUNICATIONS ARCHITECTURE DESIGN

To define and develop the communications architecture, PHI deployed a structured approach built around IBM’s proprietary optimal comparative communications architecture methodology (OCCAM). This methodology established the communications requirements for AMI, data architecture and the other technologies considered in the Blueprint. Next, an evaluation was conducted of the existing communications infrastructure and capabilities that could be leveraged in support of the new technologies. Then, alternative solutions to “close the gap” were reviewed. Finally, all of this information was incorporated into an analytical tool that matched the most appropriate communications technology to each geographic area and business need.

SYSTEM MAP AND INFORMATION MODEL

Defining the data framework and the approach to overall data integration elements across the program areas is essential if companies are to effectively and efficiently implement AMI systems and realize their identified benefits.

To help PHI understand what changes are needed to get from their current state to a shared vision of the future, the project team reviewed and documented the “current state” of the systems impacted by their plans. Then, subject matter experts with expertise in meters, billing, outage, system design, work and workforce management, and business data analysis were engaged to expand on the data architecture information, including information on systems, functions and the process flows that tie them all together. Finally, the information gathered was used to develop a shared vision of how PHI processes, functions, systems and data will fit together in the future.

By comparing the design of as-is systems with the to-be architecture of information management and information flows, PHI identified information gaps and developed a set of next steps. One key step establishes an “enterprise architecture” model for development. The first objective would be to establish and enforce governance policies. With these in place, PHI will define, draft and ratify detailed enterprise architecture and enforce priorities, standards, procedures and processes.

PHASE 2 DEPLOYMENT PLAN

Based on the planning conducted over the last half of the year, a high-level project plan for Phase 2 deployment was compiled. The focus was mainly on Blueprint initiatives, while considering dependencies and constraints reported in other transformation initiatives. PHI subject matter experts, project team leads and experience gathered from other utilities were all leveraged to develop the Blueprint deployment plan.

The deployment plan includes multiple types of tasks – process, organizational, technical and project management office-related activities – and covers a period of five to six years. Initiatives will be deployed in multiple releases, phased across jurisdictions (Delaware, District of Columbia, Maryland, New Jersey) and coordinated between meter installation and communications infrastructure buildout schedules.

The plan incorporates several initiatives, including process design, system development, communications infrastructure and AMI, and various customer initiatives. Because these initiatives are interrelated and complex, some programmatic initiatives are also called for, including change management, benefits realization and program management. From this deployment plan, more detailed project plans and dependencies are being developed to provide PHI with an end-to-end view of implementation.

As part of the planning effort, key risk areas for the Blueprint program were also defined, as shown in Figure 3. Input from interviews and knowledge leveraged from similar projects were included to ensure a comprehensive understanding of program risks and to begin developing mitigation strategies.

CONCLUSION

As PHI moves forward with implementation of its AMI systems, new issues and challenges are certain to arise, and programmatic elements are being established to respond. A program management office has been established and continues to drive more detail into plans while tracking and reporting progress against active elements. AMI process development is providing the details for business requirements, and system architecture discussions are resolving interface issues.

Deployment is still in its early stages, and much work lies ahead. However, with the effort grounded in a clear vision, the journey ahead looks promising.

Advanced Metering Infrastructure: The Case for Transformation

Although the most basic operational benefits of an advanced metering infrastructure (AMI) initiative can be achieved by simply implementing standard technological features and revamping existing processes, this approach fails to leverage the full potential of AMI to redefine the customer experience and transform the utility operating model. In addition to the obvious operational benefits – including a significant reduction in field personnel and a decrease in peak load on the system – AMI solutions have the potential to achieve broader strategic, environmental and regulatory benefits by redefining the utility-customer relationship. To capture these broader benefits, however, utilities must view AMI as a transformation initiative, not simply a technology implementation project. Utilities must couple their AMI implementations with a broader operational overhaul and take a structured approach to applying the operating capabilities required to take advantage of AMI’s vast opportunities. One key step in this structured approach to transformation is enterprise-wide business process design.

WHY “AS IS” PROCESSES WON’T WORK FOR AMI

Due to the antiquated and fragmented nature of utility processes and systems, adapting “as is” processes alone will not be sufficient to realize the full range of AMI benefits. Multiple decades of industry consolidation have resulted in utilities with diverse business processes reflecting multiple legacy company operating practices. Associated with these diverse business processes is a redundant set of largely homegrown applications resulting in operational inefficiencies that may impact customer service and reliability, and prevent utilities from adapting to new strategic initiatives (such as AMI) as they emerge.

For example, in the as-is environment, utilities are often slow to react to changes in customer preferences and require multiple functional areas to respond to a simple customer request. A request by a customer to enroll in a new program, for example, will involve at least three organizations within the utility: the call center initially handles the customer request; the field services group manages changing or reprogramming the customer’s meter to support the new program; and the billing group processes the request to ensure that the customer is correctly enrolled in the program and is billed accordingly. In most cases, a simple request like this can result in long delays to the customer due to disjointed processes with multiple hand-off points.

WHY USE AMI AS THE CATALYST FOR OPERATIONAL TRANSFORMATION?

The revolutionary nature of AMI technology and its potential for application across multiple areas of the utility make an AMI implementation the ideal opportunity to adapt the utility operating structure. To use AMI as a platform for operational transformation, utilities must shift from functionally based thinking to an enterprise-wide, process-centric operating model. This approach will ensure that utilities take full advantage of AMI’s technological capabilities without being constrained by existing processes and organizational structures.

If the utility is to offer new programs and services as well as respond to shifting external demands, it must anticipate and respond quickly to changes in behaviors. Rapid information dissemination and quick response to changes in business, environmental and economic situations are essential for utilities that wish to encourage customers to think of energy in a new way and proactively manage their usage through participation in time-of-use and real-time demand response programs. This transition requires that system and organizational hand-offs be integrated to create a seamless and flexible work flow. Without this integration, utilities cannot proactively and quickly adapt processes to satisfy ever-increasing customer expectations. In essence, AMI fails if “smart meters” and “smart systems” are implemented without “smart processes” to support them.

DESIGNING SMART PROCESSES

Designing smart future state business processes to support transformational initiatives such as AMI involves more than just rearranging existing work flows. Instead, a utility must adopt a comprehensive approach to business process design – one that engages stakeholders throughout the organization and enables them to design processes from the ground up. The utility must also design flexible processes that can adapt to changing customer, technology, business and regulatory expectations while avoiding the pitfalls of the current organization and process structure. As part of its business process design effort, the utility must also redefine jobs more broadly, increase training to support those jobs, enable decision making by front-line personnel and redirect reward systems to focus on processes as well as outcomes. Utilities must also reshape organizational cultures to emphasize teamwork, personal accountability and the customer’s importance; redefine roles and responsibilities so that managers oversee processes instead of activities and develop people rather than supervise them; and realign information systems so that they help cross-functional processes work smoothly rather than simply support individual functional areas.

BUSINESS PROCESS DESIGN FRAMEWORK

IBM’s enterprise-wide business process design framework provides a structured approach to the development of the future state processes that support operational transformations and the complexities of AMI initiatives. This framework empowers utilities to apply business process design as the cornerstone of a broader effort to transition to a customer-centric organization capable of engaging external stakeholders. In addition, this framework also supports corporate decision making and continuous improvement by emphasizing real-time metrics and measurement of operational procedures. The framework is made up of the following five phases (Figure 1):

Phase 1 – As-is functional assessment. During this phase, utilities assess their current state processes and supporting organizations and systems. The goal of this phase is to identify gaps, overlaps and conflicts with existing processes and to identify opportunities to leverage the AMI technology. This assessment requires utility stakeholders to dissect existing processes throughout the organization and identify instances where the utility is unable to fully meet customer, environmental and regulatory demands. The final step in this phase is to define a set of “future state” goals to guide process development. These goals must address all of the relevant opportunities to both improve existing processes and perform new functions and services.

Phase 2 – Future state process analysis. During this phase, utilities design end-to-end processes that meet the future state goals defined in Phase 1. To complete this effort, utilities must synthesize components from multiple functional areas and think outside the current organizational hierarchy. This phase requires engagement from participants throughout the utility organization, and participants should be encouraged to envision all relevant opportunities for using AMI to improve the utility’s relationship with customers, regulators and the environment. At the conclusion of this phase, all processes should be assessed in terms of their ability to alleviate the current state issues and to meet the future state goals defined in Phase 1.

Phase 3 – Impact identification. During this phase, utilities identify the organizational structure and corporate initiatives necessary to “operationalize” the future state processes. Key questions answered during this phase include: How will the utility transition from the current state to the future state? How will each functional area absorb the necessary changes? What new organizations, roles and skills are needed? This phase requires the utility to think outside of the current organizational structure to identify the optimal way to support the processes designed in Phase 2. During the impact identification phase, it’s crucial that process be positioned as the dominant organizational axis. Because process-organized utilities are not bound to a conventional hierarchy or fixed organizational structure, they can be customer-centric, make flexible use of their resources and respond rapidly to new business situations.

Phase 4 – Socialization. During this phase, utilities focus on obtaining ownership and buy-in from the impacted organizations and broader group of internal and external stakeholders. This phase often involves piloting the new processes and technology in a test environment and reaching out to a small set of customers to solicit feedback. This phase is also marked by the transition of the products from the first three phases of the business process design effort to the teams affected by the new processes – namely the impacted business areas as well as the organizational change management and information technology teams.

Phase 5 – Implementation and measurement. During the final phase of the business process design framework, the utility transitions from planning and design to implementation. The first step of this phase is to define the metrics and key performance indicators (KPIs) that will be used to measure the success of the new processes – necessary if organizations and managers are to be held responsible for the new processes, and for guiding continuous refinement and improvement. After these metrics have been established, the new organizational structure is put in place and the new processes are introduced to this structure.

BENEFITS AND CHALLENGES OF BUSINESS PROCESS DESIGN

The business process design framework outlined above facilitates the permeation of the utility goals and objectives throughout the entire organization. This effort does not succeed, though, without significant participation from internal stakeholders and strong sponsorship from key executives.

The benefits of this approach include the following:

  • It facilitates ownership. Because the management team is engaged at the beginning of the AMI transformation, managers are encouraged to own future state processes from initial design through implementation.
  • It identifies key issues. A comprehensive business design effort allows for earlier visibility into key integration issues and provides ample time to resolve them prior to rolling out the technologies to the field.
  • It promotes additional capabilities. The business process framework enables the utility to develop innovative ways to apply the AMI technology and ensures that future state processes are aligned to business outcomes.
  • It puts the focus on customers. A thorough business process effort ensures that the necessary processes and functional groups are put in place to empower and inform the utility customer.

The challenges of this approach include the following:

  • It entails a complex transition. The utility must manage the complexities and ambiguities of shifting from functional-based operations to process-based management and decision making.
  • It can lead to high expectations. The utility must also manage stakeholder expectations and be clear that change will be slow and painful. Revolutionary change is made through evolutionary steps – meaning that utilities cannot expect to take very large steps at any point in the process.
  • There may be technological limitations. Throughout the business process design effort, utilities will identify new ways to improve customer satisfaction through the use of AMI technology. The standard technology, however, may not always support these visions; thus, utilities must be prepared to work with vendors to support the new processes.

Although execution of future state business process design undoubtedly requires a high degree of effort, a successful operational transformation is necessary to truly leverage the features of AMI technology. If utilities expect to achieve broad-reaching benefits, they must put in place the operational and organization structures to support the transformational initiatives. Utilities cannot afford to think of AMI as a standard technology implementation or to jump immediately to the definition of system and technology requirements. This approach will inevitably limit the impact of AMI solutions and leave utilities implementing cutting-edge technology with fragmented processes and inflexible, functionally based organizational structures.

Smart Metering Options for Electric and Gas Utilities

Should utilities replace current consumption meters with “smart metering” systems that provide more information to both utilities and customers? Increasingly, the answer is yes. Today, utilities and customers are beginning to see the advantages of metering systems that provide:

  • Two-way communication between the utility and the meter; and
  • Measurement that goes beyond a single consolidated quarterly or monthly consumption total to include time-of-use and interval measurement.

For many, “smart metering” is synonymous with an advanced metering infrastructure (AMI) that collects, processes and distributes metered data effectively across the entire utility as well as to the customer base (Figure 1).

SMART METERING REVOLUTIONIZES UTILITY REVENUE AND SERVICE POTENTIAL

When strategically evaluated and deployed, smart metering can deliver a wide variety of benefits to utilities.

Financial Benefits

  • Significantly speeds cash flow and associated earnings on revenue. Smart metering permits utilities to read meters and send the data directly to the billing application. Bills go out immediately, cutting days off the meter-to-cash cycle.
  • Improves return on investment via faster processing of final bills. Customers can request disconnects as the moving van pulls away. Smart metering polls the meter and gives the customer the amount of the final bill. Online or credit card payments effectively transform final bill collection cycles from a matter of weeks to a matter of seconds.
  • Reduces bad debt. Smart metering helps prevent bad debt by facilitating the use of prepayment meters. It also reduces the size of overdue bills by enabling remote disconnects, which do not depend on crew availability.

Operational Cost Reductions

  • Slashes the cost to connect and disconnect customers. Smart metering can virtually eliminate the costs of field crews and vehicles previously required to change service from the old to the new residents of a metered property.
  • Lowers insurance and legal costs. Field crew insurance costs are high – and they’re even higher for employees subject to stress and injury while disconnecting customers with past-due bills. Remote disconnects through smart metering lower these costs. They also reduce medical leave, disability pay and compensation claims. Remote disconnects also significantly cut the number of days that employees and lawyers spend on perpetrator prosecutions and attempts to recoup damages.
  • Cuts the costs of managing vegetation. Smart metering can pinpoint blinkouts, reducing the cost of unnecessary tree trimming.
  • Reduces grid-related capital expenses. With smart metering, network managers can analyze and improve block-by-block power flows. Distribution planners can better size transformers. Engineers can identify and resolve bottlenecks and other inefficiencies. The benefits include increased throughput and reductions in grid overbuilding.
  • Shaves supply costs. Supply managers use interval data to fine-tune supply portfolios. Because smart metering enables more efficient procurement and delivery, supply costs decline.
  • Cuts fuel costs. Many utility service calls are “false alarms.” Checking meter status before dispatching crews prevents many unnecessary truck rolls.
  • Reduces theft. Smart metering can identify illegal attempts to reconnect meters, or to use energy and water in supposedly vacant premises. It can also detect theft by comparing flows through a valve or transformer with billed consumption.

Compliance Monitoring

  • Ensures contract compliance. Gas utilities can use one-hour interval meters to monitor compliance from interruptible, or “non-core,” customers and to levy fines against contract violators.
  • Ensures regulatory compliance. Utilities can monitor the compliance of customers with significant outdoor lighting by comparing similar intervals before and during a restricted time period. For example, a jurisdiction near a wildlife area might order customers to turn off outdoor lighting so as to promote breeding and species survival.
Outage Management

  • Reduces outage duration by identifying outages more quickly and pinpointing outage and nested outage locations. Smart metering also permits utilities to ensure outage resolution at every meter location.
  • Sizes outages more accurately. Utilities can ensure that they dispatch crews with the skills needed – and adequate numbers of personnel – to handle a specific job.
  • Provides updates on outage location and expected duration. Smart metering helps call centers inform customers about the timing of service restoration. It also facilitates display of outage maps for customer and public service use.
  • Detects voltage fluctuations. Smart metering can gather and report voltage data. Customer satisfaction rises with rapid resolution of voltage issues.

New Services

For utilities that offer services besides commodity delivery, smart metering provides an entry to such new business opportunities as:

  • Monitoring properties. Landlords reduce costs of vacant properties when utilities notify them of unexpected energy or water consumption. Utilities can perform similar services for owners of vacation properties or the adult children of aging parents.
  • Monitoring equipment. Power-use patterns can reveal a need for equipment maintenance. Smart metering enables utilities to alert owners or managers to a need for maintenance or replacement.
  • Facilitating home and small-business networks. Smart metering can provide a gateway to equipment networks that automate control or permit owners to access equipment remotely. Smart metering also facilitates net metering, offering some utilities a path toward involvement in small-scale solar or wind generation.

Environmental Improvements

Many of the smart metering benefits listed above include obvious environmental benefits. When smart metering lowers a utility’s fuel consumption or slows grid expansion, cleaner air and a better preserved landscape result. Smart metering also facilitates conservation through:

  • Leak detection. When interval reads identify premises where water or gas consumption never drops to zero, leaks are an obvious suspect.
  • Demand response and critical peak pricing. Demand response encourages more complete use of existing base power. Employed in conjunction with critical peak pricing, it also reduces peak usage, lowering needs for new generators and transmission corridors.
  • Load control. With the consent of the owner, smart metering permits utilities or other third parties to reduce energy use inside a home or office under defined circumstances.

CHALLENGES IN SMART METERING

Utilities preparing to deploy smart metering systems need to consider these important factors:

System Intelligence. There’s a continuing debate in the utility industry as to whether smart metering intelligence should be distributed or centralized. Initial discussions of advanced metering tended to assume intelligence embedded in meters. Distributed intelligence seemed part of a trend, comparable to “smart cards,” “smart locks” and scores of other everyday devices with embedded computing power.

Today, industry consensus favors centralized intelligence. Why? Because while data processing for purposes of interval billing can take place in either distributed or central locations, other applications for interval data and related communications systems cannot. In fact, utilities that opt for processing data at the meter frequently make it impossible to realize a number of the benefits listed above.

Data Volume. Smart metering inevitably increases the amount of meter data that utilities must handle. In the residential arena, for instance, using hour-long measurement intervals rather than monthly consumption totals replaces 12 annual reads per customer with 8,760 reads – a 730-fold increase.
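The sketch below reproduces this data-volume arithmetic for an arbitrary interval length and meter population; the function name and the one-million-meter example are assumptions for illustration.

```python
def annual_reads(meters, interval_minutes=60):
    """Meter reads generated per year at a given measurement interval."""
    reads_per_meter = 365 * 24 * 60 // interval_minutes   # 8,760 for hourly reads
    return meters * reads_per_meter


# One million residential meters read hourly instead of once a month:
print(annual_reads(1_000_000))                      # 8,760,000,000 reads per year
print(annual_reads(1_000_000) // (1_000_000 * 12))  # 730 times the monthly-read volume
```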

In most utilities today, billing departments “own” metering data. Interval meter reads, however, are useful to many departments. These readings can provide information on load size and shape – data that can then be analyzed to help reduce generation and supply portfolio costs. Interval reads are even more valuable when combined with metering features like two-way communication between meter and utility, voltage monitoring and “last gasp” messages that signal outages.

This new data provides departments outside billing with an information treasure trove. But when billing departments control the data, others frequently must wait for access lest they risk slowing down billing to a point that damages revenue flow.

Meter Data Management. An alternative way to handle data volume and multiple data requests is to offload the data into a stand-alone meter data management (MDM) application.

MDM applications gather and store meter data. They can also perform the preliminary processing required for different departments and programs. Most important, MDM gives all units equal access to commonly held meter data resources (Figure 2).

MDM provides an easy pathway between data and the multiple applications and departments that need it. Utilities can more easily consolidate and integrate data from multiple meter types, and reduce the cost of building and maintaining application interfaces. Finally, MDM provides a place to store and use data whose flow into the system cannot be regulated – for example, the flood of nearly simultaneous “last gasp” messages sent by tens of thousands of meters during a major outage.
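As a toy illustration of the shared-repository idea, the sketch below stores interval reads once and lets different departments query them independently. The class and method names are invented for illustration and do not correspond to any particular MDM product.

```python
from collections import defaultdict
from datetime import datetime


class MeterDataStore:
    """Toy MDM store: one shared repository of interval reads that billing,
    planning and other departments query independently. Names are invented."""

    def __init__(self):
        self._reads = defaultdict(list)   # meter_id -> [(timestamp, kwh), ...]

    def ingest(self, meter_id, timestamp, kwh):
        """Accept a read from the metering head end."""
        self._reads[meter_id].append((timestamp, kwh))

    def billing_total(self, meter_id, start, end):
        """Consumption for a billing period (billing department view)."""
        return sum(kwh for ts, kwh in self._reads[meter_id] if start <= ts < end)

    def load_shape(self, meter_id):
        """Average kWh by hour of day (supply and planning view)."""
        by_hour = defaultdict(list)
        for ts, kwh in self._reads[meter_id]:
            by_hour[ts.hour].append(kwh)
        return {hour: sum(vals) / len(vals) for hour, vals in sorted(by_hour.items())}


store = MeterDataStore()
store.ingest("meter-001", datetime(2008, 6, 1, 14), 1.2)
store.ingest("meter-001", datetime(2008, 6, 1, 15), 1.5)
print(store.billing_total("meter-001", datetime(2008, 6, 1), datetime(2008, 7, 1)))
print(store.load_shape("meter-001"))
```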

WEIGHING THE COSTS AND BENEFITS OF SMART METERING

Smart metering on a mass scale is relatively new. No utility can answer all questions in advance. There are ways, however, to mitigate the risks:

Consider all potential benefits. Smart metering may be difficult to cost-justify if the business case rests solely on customer acceptance of demand response. It is far easier to justify when the evaluation includes, for instance, the value of the many benefits listed above.

Evaluate pilots. Technology publications are full of stories about successful pilots followed by unsuccessful products. That’s because pilots frequently protect participants from harsh financial consequences. And it’s difficult for utility personnel to avoid spending time and attention on participants in ways that encourage them to buy into the program. Real-life program rollouts lack these elements.

Complicating the problem are likely differences between long-term and short-term behavior. The history of gasoline conservation programs suggests that while consumers initially embrace incentives to car pool or use public transportation, few make such changes on a permanent basis.

Examining the experiences of utilities in the smart metering forefront – in Italy, for example, or in California and Idaho – may provide more information than a pilot.

Develop a complete business case. Determining the cost-benefit ratio of smart metering is challenging. Some costs – for example, meter prices and installation charges – may be relatively easy to determine. Others require careful calculations. As an example, when interval meters replace time-of-use meters, how does the higher cost of interval meters weigh against the fact that they don’t require time-of-use manual reprogramming?

As in any business case, some costs must be estimated:

  • Will customer sign-up equal the number needed to break even?
  • How long will the new meters last?
  • Do current meter readers need to be retrained, and if so, what will that cost?
  • Will smart metering help retain customers that might otherwise be lost?
  • Can new services such as equipment efficiency analyses be offered, and if so, how much should the utility charge for them?

Since some utilities are already rolling out smart metering programs, it’s becoming easier to obtain real-life numbers (rather than estimates) to plug into your business case.

CONSIDER ALTERNATIVES

Technology is “smart” only when it reduces the cost of obtaining specified objectives. Utilities may find it valuable to try lower-cost routes to some results, including:

  • Customer charges to prevent unnecessary truck rolls. Such fees are common among telephone service providers and have worked well for some gas utilities responding to repeated false alarms from householder-installed carbon monoxide detectors.
  • Time-of-use billing with time/rate relationships that remain constant for a year or more. This gives consumers opportunities to make time-shifting a habit.
  • Customer education to encourage consumers to use the time-shifting features on their appliances as a contribution to the environment. Most consumers have no idea that electricity goes to waste at night. Keeping emissions out of the air and transmission towers out of the landscape could be far more compelling to many consumers than a relatively small saving resulting from an on- and off-peak pricing differential.
  • Month-to-month rate variability. One study found that approximately a third of the efficiency gains from real-time interval pricing could be captured by simply varying the flat retail rates monthly – and at no additional cost for metering. [1] While a third of the efficiency gains might not be enough to attain long-term goals, they might be enough to fill in a shorter-term deficit, permitting technology costs and regulatory climates to stabilize before decisions must be made.
  • Multitier pricing based on consumption. Today, two-tier pricing – that is, a lower rate for the first few hundred kilowatt-hours per month and a higher rate for additional usage – is common. However, three or four tiers might better capture the attention of those whose consumption is particularly high – owners of large homes and pool heaters, for instance – without burdening those at the lower end of the economic ladder. Tiers plus exception handling for hardships like high-consuming medical equipment would almost certainly be less difficult and expensive than universal interval metering. (A simple tiered-bill calculation is sketched after this list.)
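The sketch below shows the tiered-bill arithmetic referred to in the last option above; the tier sizes and rates are hypothetical.

```python
# Hypothetical tier sizes and rates, for illustration only.
def tiered_bill(monthly_kwh, tiers):
    """Bill a month's consumption against ascending price tiers.

    tiers: list of (block_kwh, rate) pairs; use float('inf') for the last block.
    """
    total, remaining = 0.0, monthly_kwh
    for block_kwh, rate in tiers:
        used = min(remaining, block_kwh)
        total += used * rate
        remaining -= used
        if remaining <= 0:
            break
    return total


# A three-tier residential rate: 1,400 kWh billed at 8, 12 and 18 cents/kWh.
print(tiered_bill(1400, [(500, 0.08), (500, 0.12), (float("inf"), 0.18)]))   # 172.0
```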

A thorough evaluation of the benefits and challenges of advanced metering systems, along with an understanding of alternative means to achieving those benefits, is essential to utilities considering deployment of advanced metering systems.

Note: The preceding was excerpted from the Oracle white paper “Smart Metering for Electric and Gas Utilities.” To receive the complete paper, Email oracleutilities_ww@oracle.com.

ENDNOTE

  1. Holland and Mansur, “The Distributional and Environmental Effects of Time-varying Prices in Competitive Electricity Markets.” Results published in “If RTP Is So Great, Why Don’t We See More of It?” Center for the Study of Energy Markets Research Review, University of California Energy Institute, Spring 2006. Available at www.ucei.berkeley.edu/

Intelligent Communications Platform Provides Foundation for Clean Technology Solutions to Smart Grid

Since the wake-up call of the 2003 blackout in the northeastern United States and Canada, there’s been a steady push to improve the North American power grid. Legislation in both the United States and Canada has encouraged investments in technologies intended to make the grid intelligent and to solve critical energy issues. The Energy Policy Act (EPAct) of 2005 mandated that each state evaluate the business case for advanced metering infrastructure (AMI). In Ontario, the Energy Conservation Responsibility Act of 2006 mandated deployment of smart meters to all consumers by 2010. And the recent U.S. Energy Independence and Security Act of 2007 expands support from the U.S. government for investments in smart grid technologies while further emphasizing the need for the power industry to play a leadership role in addressing carbon dioxide emissions affecting climate change.

Recent state-level legislation and consumer sentiment suggest an increasing appetite for investments in distributed clean-technology energy solutions. Distributed generation technologies such as solar, wind and bio-diesel are becoming more readily available and have the potential to significantly improve grid operations and reliability.

THE NEXT STEP

Although the full vision for the smart grid is still somewhat undefined, most agree that an intelligent communications platform is a necessary foundation for developing and realizing this vision. Of the 10 elements that define the smart grid in the Energy Independence and Security Act of 2007, more than half directly relate to or depend on advanced communications capabilities.

A core business driver for intelligent communications is full deployment of smart metering, also referred to as advanced metering infrastructure. AMI involves automated measurement of time-of-use energy consumption – at hourly or 15-minute intervals – and provides for new time-of-use rates that encourage consumers to use energy during off-peak hours, when generation costs are low, rather than during peak periods, when generation costs are high and the grid is under stress. With time-of-use rates, consumers may continue to use power during peak periods but will pay a higher price to do so. AMI may also include remote service switch functionality, which can reduce the costs of site visits otherwise required to manage move-outs and move-ins or to support prepayment programs.
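As a simple illustration of how time-of-use rates turn interval data into a bill, the Python sketch below applies assumed peak and off-peak prices to (timestamp, kWh) interval reads. The rates and the weekday-afternoon peak window are illustrative assumptions, not an actual tariff.

# Illustrative only: the peak window and rates below are hypothetical, not an actual tariff.
from datetime import datetime

PEAK_RATE = 0.25      # $/kWh during the assumed 2 p.m.-8 p.m. weekday peak
OFF_PEAK_RATE = 0.08  # $/kWh at all other times

def is_peak(ts: datetime) -> bool:
    """Assume weekday afternoons and evenings are the peak period."""
    return ts.weekday() < 5 and 14 <= ts.hour < 20

def tou_charge(interval_reads: list[tuple[datetime, float]]) -> float:
    """Sum charges over (timestamp, kWh) interval reads, e.g. hourly or 15-minute AMI data."""
    return round(sum(kwh * (PEAK_RATE if is_peak(ts) else OFF_PEAK_RATE)
                     for ts, kwh in interval_reads), 2)

reads = [(datetime(2008, 6, 2, 15, 0), 1.2),   # Monday 3 p.m. - peak
         (datetime(2008, 6, 2, 23, 0), 0.9)]   # Monday 11 p.m. - off-peak
print(tou_charge(reads))  # 1.2*0.25 + 0.9*0.08 = 0.37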

Other smart grid capabilities that may be readily realized through the deployment of intelligent communications and AMI include improved outage detection and restoration monitoring, revenue assurance and virtual metering of distribution assets.

CRITICAL ATTRIBUTES OF AMI SOLUTIONS

Modern communications network solutions leverage standards-based technology such as IEEE 802.15.4 to provide robust two-way wireless mesh network communications to intelligent devices. The intelligent communications platform should provide for remote firmware upgrades to connected intelligent devices and be capable of leveraging Internet protocol-based communications across multiple wide-area network (WAN) options (Figure 1).

Support for broad interoperability and interconnectivity is critical to maximizing the value of a communications infrastructure investment. Interoperability for AMI applications means supporting a range of options for metering devices. A communications platform should be meter manufacturer-independent, giving utilities freedom of choice. This preserves current and future price competition for the meter itself, which is one of the more expensive elements of a smart metering solution.

Interconnectivity for communications platforms refers to the ability to support a broad range of devices and systems, from end-point devices in the field to head-end systems. To support demand-side management and energy-efficiency initiatives, an intelligent communications platform should support programmable communicating thermostats (PCTs), in-home displays (IHDs) and load control switches.

The system may also support standards-based home-area networks (HANs) such as ZigBee and Zensys. Ultimately, an intelligent communications platform should support a model in which third-party manufacturers can develop solutions that operate on the network, providing competitive options for utilities.

For enterprise system interconnectivity, an AMI, demand-side management or other smart grid head-end application should be developed using service-oriented architecture (SOA) principles and Web technologies. These applications should also support modern Web services-based integration, providing published simple object access protocol (SOAP)-based APIs. This approach eases integration with existing enterprise systems and simplifies the process of adding functionality (whether through enhancements provided by the vendor, add-ons delivered by third parties or capabilities developed by the utility).
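The Python sketch below suggests what such a Web services integration might look like from a client's perspective: a SOAP request for interval reads sent to a hypothetical head-end endpoint. The endpoint URL, XML namespace and operation name are invented for illustration and do not correspond to any particular vendor's API.

# Hypothetical sketch of calling a SOAP-based head-end API; the endpoint URL,
# namespace and operation name are invented for illustration, not a real product API.
import urllib.request

ENDPOINT = "https://headend.example.com/services/MeterReading"  # hypothetical endpoint

SOAP_BODY = """<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:ami="http://example.com/ami"> <!-- hypothetical namespace -->
  <soapenv:Body>
    <ami:GetIntervalReads>
      <ami:meterId>123456789</ami:meterId>
      <ami:startDate>2008-06-01</ami:startDate>
      <ami:endDate>2008-06-02</ami:endDate>
    </ami:GetIntervalReads>
  </soapenv:Body>
</soapenv:Envelope>"""

request = urllib.request.Request(
    ENDPOINT,
    data=SOAP_BODY.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "GetIntervalReads"},
)
with urllib.request.urlopen(request) as response:
    print(response.read().decode("utf-8"))  # raw SOAP response for the CIS or OMS to parse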

Finally, the value of an intelligent communications platform deployment is driven by the ability of other enterprise applications and processes to use the vast amount of new data received through the AMI, demand-side management and smart grid applications. Core areas of extended value include integration with customer information systems and call center processes, and integration with outage management and work management systems. In addition, the intelligent communications platform makes utilities much better able to market new offerings to targeted customers based on their energy consumption profiles while also empowering consumers with new tools and access to information. The result: greater control over energy costs and improved customer satisfaction.

INTEGRATION OF DISTRIBUTED GENERATION RESOURCES

Deployment and integration of distributed generation, including renewable resources, is an important supply-side element of the smart grid vision. This may include arrays of solar photovoltaic panels on home and office roofs, solar carports, small wind turbines (3-5 kVA), small biogas turbines and fuel cells.

By integrating these resources into a common communications platform, utilities have the opportunity to develop solutions that achieve far more than the sum of independent systems. For example, an intelligent plug-in hybrid electric vehicle (PHEV) connected to a smart solar carport could decide, based on dynamic price signals received through the communications platform, when to purchase power to charge the vehicle or even when to sell power back to the grid in a vehicle-to-grid (V2G) model, as sketched below. By maintaining intelligence at the edge of the grid, consumers and distributed resource owners are empowered to manage resources to their own benefit and to the benefit of the grid as a whole.
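A minimal Python sketch of this kind of price-responsive dispatch logic follows. The buy and sell price thresholds and the state-of-charge rules are illustrative assumptions, not part of any specific V2G program or vendor platform.

# Illustrative decision logic only: the price thresholds and dispatch rules are
# assumptions, not part of any specific V2G program or vendor platform.

BUY_BELOW = 0.07   # $/kWh: charge the battery when power is inexpensive (e.g. overnight)
SELL_ABOVE = 0.30  # $/kWh: discharge to the grid when prices spike

def dispatch(price_per_kwh: float, state_of_charge: float) -> str:
    """Decide what a plugged-in PHEV should do given a dynamic price signal.

    state_of_charge is the battery level as a fraction (0.0-1.0).
    """
    if price_per_kwh <= BUY_BELOW and state_of_charge < 0.95:
        return "charge"     # buy inexpensive off-peak power
    if price_per_kwh >= SELL_ABOVE and state_of_charge > 0.50:
        return "discharge"  # sell back to the grid, keeping a reserve for driving
    return "idle"           # otherwise hold the current charge

print(dispatch(0.05, 0.40))  # cheap power, low battery  -> charge
print(dispatch(0.35, 0.80))  # price spike, full battery -> discharge
print(dispatch(0.15, 0.60))  # mid-range price           -> idle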

SUMMARY

Now is the time to embark on realizing the smart grid vision. Global warming and system reliability issues are driving a sense of urgency. An intelligent communications platform provides a foundation capable of supporting multiple devices in multiple environments – commercial, industrial and residential – working seamlessly together in a single unified network.

All of the technical assets of a smart grid can be managed holistically rather than as isolated or poorly connected parts. The power of a network grows geometrically with the number of resources and assets actively connected to it. This is the future of the smart grid, and it’s available today.