Software-Based Intelligence: The Missing Link in the SmartGrid Vision

Achieving the SmartGrid vision requires more than advanced metering infrastructure (AMI), supervisory control and data acquisition (SCADA), and advanced networking technologies. While these critical technologies provide the main building blocks of the SmartGrid, its fundamental keystone – its missing link – will be embedded software applications located closer to the edge of the electric distribution network. Only through embedded software will the true SmartGrid vision be realized.

To understand what we mean by the SmartGrid, let’s take a look at some of its common traits:

  • It’s highly digital.
  • It’s self-healing.
  • It offers distributed participation and control.
  • It empowers the consumer.
  • It fully enables electricity markets.
  • It optimizes assets.
  • It’s evolvable and extensible.
  • It provides information security and privacy.
  • It features an enhanced system for reliability and resilience.

All of the above-described traits – which together comprise a holistic definition of the SmartGrid – share the requirement to embed intelligence in the hardware infrastructure (which is composed of advanced grid components such as AMI and SCADA). Just as important as the hardware for hosting the embedded software are the communications and networking technologies that enable real-time and near-real-time communications among the various grid components.

The word intelligence has many definitions; however, the one cited in the 1994 Wall Street Journal article “Mainstream Science on Intelligence” (written by Linda Gottfredson and signed by 51 other professors) applies reasonably well to the SmartGrid. It defines intelligence as the “ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience.”

While the ability of the grid to approximate the reasoning and learning capabilities of humans may be a far-off goal, the fact that the terms intelligence and smart appear so often these days raises the following question: How can the existing grid become the SmartGrid?

THE BRAINS OF THE OPERATION

It can’t be overemphasized that the SmartGrid derives its intelligence from analytics and algorithms delivered through embedded software applications. While seemingly simple in concept and well understood in other industries, this topic typically isn’t addressed in any depth in many SmartGrid R&D and pilot projects. Because SmartGrid has become an industry buzzword, every company with any related technology is calling that technology SmartGrid technology – all well and good, as long as you aren’t concerned about actually having intelligence in your SmartGrid project. It is this author’s opinion, however, that very few companies actually have what it takes to claim the “smart” or “intelligence” part of the SmartGrid infrastructure – what we see as the missing link in the SmartGrid value chain.

A more realistic way to define intelligence in reference to the SmartGrid might read as follows:

The ability to provide longer-term planning and balancing of the grid; near-real-time and real-time sensing, filtering, planning and balancing; and self-healing, adaptive response and upgradeable logic that supports continuous changes to grid operations in order to ensure cost reductions, reliability and resilience.

Software-based intelligence can be applied to all aspects or characteristics of the SmartGrid as discussed above. Figure 1 summarizes these roles.

BASIC BUILDING BLOCKS

Taking into consideration the very high priority that must be placed on established IT-industry concepts of security and interoperability as defined in the GridWise Architecture Council (GWAC) Framework for Interoperability, the SmartGrid should include as its basic building blocks the components outlined in Figure 2.

The real-world grid and supporting infrastructure will need to incorporate legacy systems as well as incremental changes consisting of multiple and disparate upgrade paths. The ideal path to realizing the SmartGrid vision is to install any SmartGrid project in the order shown in Figure 2 – that is, the device hardware would be installed in Block 1, communications and networking infrastructure added in Block 2, embedded intelligence added in Block 3, and middleware services and applications layered in Block 4. In a perfect world, the embedded intelligence software in Block 3 would be configured into the device block at the time of design or purchase. Intelligence types (in the form of services or applications) that could be preconfigured into the device layer with embedded software include, but aren’t limited to, the following (a simple sketch of these service types appears after the list):

  • Capture. Provides status and reports on operation, performance and usage of a given monitored device or environment.
  • Diagnose. Enables a device to self-optimize or allows a service person to monitor, troubleshoot, repair and maintain devices; upgrades or augments the performance of a given device; and prevents problems with version control, technology obsolescence and device failure.
  • Control and automate. Coordinates the sequenced activity of several devices. This kind of intelligence can also cause devices to perform discrete on/off actions.
  • Profile and track behavior. Monitors variations in the location, culture, performance, usage and sales of a device.
  • Replenishment and commerce. Monitors consumption of a device and buying patterns of the end-user (allowing applications to initiate purchase orders or other transactions when replenishment is needed); provides location mapping and logistics; tracks and optimizes the service support system for devices.
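To make these service types concrete, here is a minimal, purely illustrative Python sketch of how an edge device might host them; the class names and the DeviceNode container are hypothetical and do not describe any particular SmartGrid product.

"""Illustrative sketch: intelligence types preconfigured on an edge device.
All class and field names are hypothetical, for explanation only."""

from abc import ABC, abstractmethod


class IntelligenceService(ABC):
    """One embedded-intelligence capability hosted on a grid device."""

    @abstractmethod
    def evaluate(self, reading: dict) -> dict:
        """Turn a raw device reading into a status, diagnosis or action."""


class CaptureService(IntelligenceService):
    def evaluate(self, reading: dict) -> dict:
        # Report status and usage of the monitored device.
        return {"type": "status", "load_kw": reading.get("load_kw")}


class DiagnoseService(IntelligenceService):
    def evaluate(self, reading: dict) -> dict:
        # Flag conditions the device (or a service person) should act on.
        overheated = reading.get("temp_c", 0) > 90
        return {"type": "diagnosis", "overheated": overheated}


class ControlService(IntelligenceService):
    def evaluate(self, reading: dict) -> dict:
        # Coordinate a discrete on/off action based on the reading.
        return {"type": "control", "shed_load": reading.get("load_kw", 0) > 50}


class DeviceNode:
    """An edge device (meter, inverter gateway, etc.) hosting several services."""

    def __init__(self, services):
        self.services = list(services)

    def process(self, reading: dict):
        return [svc.evaluate(reading) for svc in self.services]


if __name__ == "__main__":
    node = DeviceNode([CaptureService(), DiagnoseService(), ControlService()])
    print(node.process({"load_kw": 62.5, "temp_c": 95}))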

EMBEDDED INTELLIGENCE AT WORK

Intelligence types will, of course, differ according to their application. For example, a distribution utility looking to optimize assets and real-time distribution operations may need sophisticated mathematical and artificial intelligence solutions with dynamic, nonlinear optimization models (to accommodate a high degree of uncertainty), while a homeowner wishing to participate in demand response may require only simple business rules. The embedded intelligence is therefore responsible for managing and mining potentially billions, if not trillions, of device-generated data points for decision support, settlement, reliability and other financially significant transactions. This computational intelligence can sense, store and analyze any number of information patterns to support the SmartGrid vision. The software infrastructure portion of the SmartGrid building blocks must accommodate all of these cases – from simple to complex – if the economics are to be viable.

For example, the GridAgents software platform is being used in several large U.S. utility distribution automation infrastructure enhancements to embed intelligence in the entire distribution and extended infrastructure; this in turn facilitates multiple applications simultaneously, as depicted in Figure 3 (highlighting microgrids and compact networks). Included are the following example applications: renewables integration, large-scale virtual power plant applications, volt and VAR management, SmartMeter management and demand response integration, condition-based maintenance, asset management and optimization, fault location, isolation and restoration, look-ahead contingency analysis, distribution operation model analysis, relay protection coordination, “islanding” and microgrid control, and sense-and-respond applications.

Using this model of embedded intelligence, the universe of potential devices that could be directly included in the grid system spans building and home automation, distribution automation, substation automation, transmission systems, and energy markets and operations – all part of what Harbor Research terms the Pervasive Internet. The Pervasive Internet concept assumes that devices are connected using TCP/IP protocols; however, it is not limited by whether a particular network is a mission-critical SCADA network or a home automation network (which obviously require very different security protocols). As the missing link, the embedded software intelligence we’ve been talking about can be present in any of these Pervasive Internet devices.

DELIVERY SYSTEMS

There are many ways to deliver the embedded software intelligence building block of the SmartGrid, and many vendors will be vying to participate in this rapidly expanding market. In a physical sense, the embedded intelligence can be delivered through various grid interfaces, including facility-level and distribution-system automation and energy management systems. The best way to realize the SmartGrid vision, however, will most likely come from making as much use as possible of the existing infrastructure (since installing new infrastructure is extremely costly). The most promising areas for embedding intelligence include the various gateways and collector nodes, as well as devices on the grid itself (as shown in Figure 4). Examples of such devices include SmartMeter gateways, substation PCs, inverter gateways and so on. By taking advantage of the natural and distributed hierarchy of device networks, multiple SmartGrid service offerings can be delivered with a common infrastructure and common protocols.

Some of the most promising technologies for delivering the embedded intelligence layer of the SmartGrid infrastructure include the following:

  • The semantic Web is an extension of the current Web that permits machine-understandable data. It provides a common framework that allows data to be shared and reused across application and company boundaries. It integrates applications using URIs for naming and XML for syntax.
  • Service-oriented computing represents a cross-disciplinary approach to distributed software. Services are autonomous, platform-independent computational elements that can be described, published, discovered, orchestrated and programmed using standard protocols. These services can be combined into networks of collaborating applications within and across organizational boundaries.
  • Software agents are autonomous, problem-solving computational entities. They often interact and cooperate with other agents (both people and software) that may have conflicting aims. Known as multi-agent systems, such environments add the ability to coordinate complex business processes and adapt to changing conditions on the fly (a minimal agent sketch follows this list).
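To illustrate the multi-agent idea in the abstract (this is not the GridAgents platform, and all names are invented), the following minimal sketch shows a feeder agent asking device agents to shed load:

"""Minimal multi-agent sketch: a feeder agent asks device agents to shed load.
This illustrates the agent concept only, not any vendor's implementation."""


class DeviceAgent:
    """Represents one controllable device (e.g., a water heater or HVAC unit)."""

    def __init__(self, name: str, sheddable_kw: float):
        self.name = name
        self.sheddable_kw = sheddable_kw

    def propose_shed(self, requested_kw: float) -> float:
        # Offer whatever the device can curtail, up to the request.
        return min(self.sheddable_kw, requested_kw)


class FeederAgent:
    """Coordinates device agents to relieve an overloaded feeder."""

    def __init__(self, device_agents):
        self.device_agents = device_agents

    def relieve_overload(self, overload_kw: float) -> dict:
        plan, remaining = {}, overload_kw
        for agent in self.device_agents:
            if remaining <= 0:
                break
            offer = agent.propose_shed(remaining)
            if offer > 0:
                plan[agent.name] = offer
                remaining -= offer
        return {"plan": plan, "unmet_kw": max(remaining, 0.0)}


if __name__ == "__main__":
    feeder = FeederAgent([DeviceAgent("water_heater_17", 4.5),
                          DeviceAgent("hvac_03", 6.0)])
    print(feeder.relieve_overload(overload_kw=8.0))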

CONCLUSION

By incorporating the missing link in the SmartGrid infrastructure – the embedded-intelligence software building block – not only can the SmartGrid vision be achieved, but significant benefits can be delivered to the utility and other stakeholders much more efficiently and through incremental changes to the functions supporting that vision. Embedded intelligence provides a structured way to communicate with and control the large number of disparate energy-sensing, communications and control systems within the electric grid infrastructure. This includes the capability to deploy at low cost, to scale and to enable security, as well as the ability to interoperate with the many types of devices, communication networks, data protocols and software systems used to manage complex energy networks.

A fully distributed intelligence approach based on embedded software offers potential advantages in lower cost, flexibility, security, scalability and acceptance among a wide group of industry stakeholders. By embedding functionality in software and distributing it across the electrical distribution network, the intelligence is pushed to the edge of the system, where it can provide the most value. In this way, every node can be capable of hosting an intelligent software program. Although decentralized structures remain a controversial topic, this author believes they will be critical to the success of next-generation energy networks (the SmartGrid). The current electrical grid infrastructure already contains a large number of devices that provide data which can serve as the starting point for embedded smart monitoring and decision support, including electric meters, distribution equipment, network protectors, distributed energy resources and energy management systems. From a high-level
design perspective, the embedded intelligence software architecture needs to support the following:

  • Decentralized management and intelligence;
  • Extensibility and reuse of software applications;
  • New components that can be added to or removed from the system with little central control or coordination (see the sketch after this list);
  • Fault tolerance at both the system level and the subsystem level to detect and recover from failures;
  • Support for carrying out analysis and control where the resources are available, not where the results are needed (at the edge versus the central grid);
  • Compatibility with different information technology devices and systems;
  • Open communication protocols that run on any network; and
  • Interoperability and integration with existing and evolving energy standards.
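As a hedged sketch of two of these requirements, adding or removing components with little central coordination and detecting failed subsystems, the hypothetical registry below lets components announce themselves and drops the ones whose heartbeats go stale; the names and timeout are assumptions, not part of any standard.

"""Sketch of decentralized component registration with heartbeat-based fault
detection. Names and thresholds are illustrative assumptions only."""

import time


class ComponentRegistry:
    """Tracks edge components; no central coordinator has to approve joins."""

    def __init__(self, heartbeat_timeout_s: float = 30.0):
        self.heartbeat_timeout_s = heartbeat_timeout_s
        self._last_seen = {}  # component id -> last heartbeat timestamp

    def announce(self, component_id: str) -> None:
        """A component joins (or rejoins) simply by announcing itself."""
        self._last_seen[component_id] = time.time()

    def heartbeat(self, component_id: str) -> None:
        self._last_seen[component_id] = time.time()

    def remove(self, component_id: str) -> None:
        self._last_seen.pop(component_id, None)

    def live_components(self) -> list:
        """Components with recent heartbeats; stale ones are treated as failed."""
        now = time.time()
        return [cid for cid, seen in self._last_seen.items()
                if now - seen <= self.heartbeat_timeout_s]


if __name__ == "__main__":
    registry = ComponentRegistry(heartbeat_timeout_s=5.0)
    registry.announce("smart_meter_0421")
    registry.announce("line_sensor_77")
    print(registry.live_components())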

Adding the embedded-intelligence building block to existing SmartGrid infrastructure projects (including AMI and SCADA) and advanced networking technology projects will bring the SmartGrid vision to market faster and more economically while accommodating the incremental nature of SmartGrid deployments. The embedded intelligence software can provide some of the greatest benefits of the SmartGrid, including asset optimization, run-time intelligence and flexibility, the ability to solve multiple problems with a single infrastructure and greatly reduced integration costs through interoperability.

Using Analytics for Better Mobile Technology Decisions

Mobile computing capabilities have been proven to drive business value by providing traveling executives, field workers and customer service personnel with real-time access to customer data. Better and more timely access to information shortens response times, improves accuracy and makes the workforce more productive.

However, although your organization may agree that technology can improve business processes, different stakeholders – IT management, financial and business leadership and operations personnel – often have different perspectives on the real costs and value of mobility. For example, operations wants tools that help employees work faster and focus more intently on the customer; finance wants the solution that costs the least amount this quarter; and IT wants to implement mobile projects that can succeed without draining resources from other initiatives.

It may not be obvious, but there are ways to achieve everyone’s goals. Analytics can help operations, finance and IT find common ground. When teams understand the data, they can understand the logic. And when they understand the logic they can support making the right decision.

EXPOSING THE FORMULA

Deploying mobile technology is a strategic initiative with far-reaching consequences for the health of an enterprise. In the midst of evaluating a mobile project, however, it’s easy to forget that the real goal of hardware-acquisition initiatives is to make the workforce more productive and improve both the top and bottom lines over the long term.

Most decision-analytics tools focus on up-front procurement questions alone, because the numbers seem straightforward and uncomplicated. But these analyses miss the point. The best analysis is one that can determine which of the solutions will provide the most advantages to the workforce at the lowest possible overall cost to the organization.

To achieve the best return on investment we must do more than recoup an out-of-pocket expense: Are customers better served? Are employees working better, faster, smarter? Though hard to quantify, these are the fundamental aspects that determine the return on investment (ROI) of technology.

It’s possible to build a vendor-neutral analysis to calculate the total cost of ownership (TCO) and ROI of mobile computers. Panasonic Computer Solutions Company, the manufacturer of Toughbook notebooks, enlisted the services of my analytics company, Serious Networks, Inc., to develop an unbiased TCO/ROI application to help companies make better decisions when purchasing mobile computers.

The Panasonic-sponsored operational analysis tool provides statistically valid answers by performing a simulation of the devices as they would be used and managed in the field, generating a model that compares the costs and benefits of multiple manufacturers’ laptops. Purchase cost, projected downtime, the range of wireless options, notebook features, support and other related costs are all incorporated into this analytic toolset.

From more than 100 unique simulations run with actual customers, four key TCO/ROI questions emerged:

  • What will it cost to buy a proposed notebook solution?
  • What will it cost to own it over the life of the project?
  • What will it cost to deploy and decommission the units?
  • What value will be created for the organization?

MOVING BEYOND GUESSTIMATES – CONSIDERING COSTS AND VALUE OVER A LIFETIME

There is no such thing as an average company, so an honest analysis uses actual corporate data instead of industry averages. Just because a device is the right choice for one company does not make it the right choice for yours.

An effective simulation takes into account the cost of each competing device, the number of units and the rate of deployment. It calculates the cost of maintaining a solution and establishes the value of productive time using real loaded labor rates or revenue hours. It considers buy versus lease questions and can extrapolate how features will be used in the field.

As real-world data is entered, the software determines which mobile computing solution is most likely to help the company reach its goals. Managers can perform what-if analyses by adjusting assumptions and re-running the simulation. Within this framework, managers will build a business case that forecasts the costs of each mobile device against the benefits derived over time (see Figures 1 and 2).
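As a purely structural sketch of such a what-if comparison (not the Panasonic/Serious Networks tool, and with every figure and field name invented), the following code totals purchase, support and downtime costs against productivity value for two hypothetical devices:

"""Toy TCO/ROI what-if comparison. All figures and fields are hypothetical;
this is a structural sketch only, not the Panasonic/Serious Networks model."""

from dataclasses import dataclass


@dataclass
class DeviceScenario:
    name: str
    purchase_cost: float            # per unit
    annual_support_cost: float      # per unit
    annual_downtime_hours: float    # per unit
    productivity_gain_hours: float  # extra productive hours per unit per year


def total_cost_of_ownership(s: DeviceScenario, units: int, years: int,
                            loaded_labor_rate: float) -> float:
    downtime_cost = s.annual_downtime_hours * loaded_labor_rate * years
    return units * (s.purchase_cost + s.annual_support_cost * years + downtime_cost)


def net_value(s: DeviceScenario, units: int, years: int,
              loaded_labor_rate: float) -> float:
    productivity_value = units * s.productivity_gain_hours * loaded_labor_rate * years
    return productivity_value - total_cost_of_ownership(s, units, years, loaded_labor_rate)


if __name__ == "__main__":
    rugged = DeviceScenario("rugged", 3200, 150, annual_downtime_hours=4,
                            productivity_gain_hours=60)
    commercial = DeviceScenario("commercial", 1400, 300, annual_downtime_hours=22,
                                productivity_gain_hours=45)
    for scenario in (rugged, commercial):
        value = net_value(scenario, units=500, years=4, loaded_labor_rate=55.0)
        print(f"{scenario.name}: net value over 4 years = ${value:,.0f}")

Re-running the same calculation with adjusted assumptions (unit counts, labor rates, downtime) is the essence of the what-if analysis described above.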

MAKING INTANGIBLES TANGIBLE

The 90-minute analysis process is very granular. It’s based on the industry segment – because it simulates the tasks of the workforce – and compares up to 10 competing devices.

Once devices are selected, purchase or lease prices are entered, followed by value-added benefits like no-fault warranties and on-site support. Intangible factors favoring one vendor over another, such as incumbency, are added to the data set. The size and rate of the deployment, as well as details that determine the cost of preparing the units for the workforce, are also considered.

Next the analysis accounts for the likelihood and cost of failure, using your own experience as a baseline. Somewhat surprisingly, the impact of failure is given less weight than most outside observers would expect. Reliability is important, but it’s not the only or most important attribute.

Productivity and operational enhancements are given more weight because they can have a significantly greater financial impact than reliability: statistically, employees spend much more of their time working than dealing with equipment malfunctions.

A matrix of features and key workforce behaviors is developed to examine the relative importance of touch screens, wireless and GPS, as well as each computer vendor’s ability to provide those features as standard or extra-cost equipment. The features are rated for their time-and-motion impact on your organization, and an operations efficiency score is applied to approximate real-world results.

During the session, the workforce is described in detail, because this information directly affects the cost and benefit. To assess the value of a telephone lineman’s time, for example, the system must know the average number of daily service orders, the percentage of those service calls that require re-work and whether linemen are normally in the field five, six or seven days a week.

Once the data is collected and input, it can be modified to provide instantaneous what-if, heads-up and break-even analysis reports – without interference from the vendor. The model is built in Microsoft Excel so that anyone can assess the credibility of the analysis and determine independently that there are no hidden calculations or unfair formulas skewing the results.

CONCLUSION

The Panasonic simulation tool can help different organizations within a company come to consensus before making a buying decision. Analytics help clarify whether a purpose-built rugged or business-rugged system or some other commercial notebook solution is really the right choice for minimizing the TCO and maximizing the ROI of workforce mobility.

ABOUT THE AUTHOR

Jason Buk is an operations director at Serious Networks, Inc., a Denver-based business analytics firm. Serious Networks uses honest forecasting and rigorous analysis to determine what resources are most likely to increase the effectiveness of the workforce, meet corporate goals and manage risk in the future.

Is Your Mobile Workforce Truly Optimized?

ClickSoftware is the leading provider of mobile workforce management and service optimization solutions that create business value for service operations through higher levels of productivity, customer satisfaction and cost effectiveness. Combining educational, implementation and support services with best practices and its industry-leading solutions, ClickSoftware drives service decision making across all levels of the organization.

Our mobile workforce management solution helps utilities empower mobile workers with accurate, real-time information for optimum service and quick on-site decision making. From proactive customer demand forecasting and capacity planning to real-time decision-making, incorporating scheduling, mobility and location-based services, ClickSoftware helps service organizations get the most out of their resources.

The IBM-ClickSoftware alliance provides the most comprehensive offering for Mobile Workforce and Asset Management powering the real-time service enterprise. Customers can benefit from maximized workforce productivity and customer satisfaction while controlling, and then minimizing, operational costs.

ClickSoftware provides a flexible, scalable and proven solution that has been deployed at many utility companies around the world. Highlights include the ability to:

  • Automatically update the schedule based on real-time information from the field;
  • Manage crews (parts and people);
  • Cover a wide variety of job types within one product – from short jobs requiring one person to multistage jobs needing a multi-person team over several days or weeks;
  • Balance regulatory, environmental and union compliance;
  • Continuously strive to raise the bar in operational excellence;
  • Incorporate street-level routing into the decision-making process; and
  • Plan for catastrophic events and seasonal variability in field service operations.

The resulting value proposition to the customer is extremely compelling:

  • Typically, optimized scheduling and routing of the mobile workforce generates a 31 percent increase in jobs per day versus the industry average (Source: AFSMI survey 2003).
  • A variety of solutions, ranging from entry level to advanced, directly address the broad spectrum of pains experienced by service organizations around the world, including optimized scheduling, routing, mobile communications and integration of solutions components – within the service optimization solution itself and also into the CRM/ERP/EAM back end.
  • An entry level offering with a staged upgrade path toward a fully automated service optimization solution ensures that risk is managed and the most challenging of customer requirements may be met. This "least risk" approach for the customer is delivered by a comprehensive set of IBM business consulting, installation and support services.
  • The industry-proven credibility of ClickSoftware’s ServiceOptimization Suite, combined with IBM’s wireless middleware, software, hardware and business consulting services, provides the customer with the most effective platform for managing field service operations.

ClickSoftware’s customers represent a cross section of leaders in the utilities, telecommunications, computer and office equipment, home services, and capital equipment industries. Close to 100 customers around the world have employed ClickSoftware service optimization solutions and services to achieve optimal levels of field service.

To find out more visit www.clicksoftware.com or call 888.438.3308.

Achieving Decentralized Coordination In the Electric Power Industry

For the past century, the dominant business and regulatory paradigms in the electric power industry have been centralized economic and physical control. The ideas presented here and in my forthcoming book, Deregulation, Innovation, and Market Liberalization: Electricity Restructuring in a Constantly Evolving Environment (Routledge, 2008), comprise a different paradigm – decentralized economic and physical coordination – which will be achieved through contracts, transactions, price signals and integrated intertemporal wholesale and retail markets. Digital communication technologies – which are becoming ever more pervasive and affordable – are what make this decentralized coordination possible. In contrast to the “distributed control” concept often invoked by power systems engineers (in which distributed technology is used to enhance centralized control of a system), “decentralized coordination” represents a paradigm in which distributed agents themselves control part of the system, and in aggregate, their actions produce order: emergent order. [1]

Dynamic retail pricing, retail product differentiation and complementary end-use technologies provide the foundation for achieving decentralized coordination in the electric power industry. They bring timely information to consumers and enable them to participate in retail market processes; they also enable retailers to discover and satisfy the heterogeneous preferences of consumers, all of whom have private knowledge that’s unavailable to firms and regulators in the absence of such market processes. Institutions that facilitate this discovery through dynamic pricing and technology are crucial for achieving decentralized coordination. Thus, retail restructuring that allows dynamic pricing and product differentiation, doesn’t stifle the adoption of digital technology and reduces retail entry barriers is necessary if this value-creating decentralized coordination is to happen.

This paper presents a case study – the “GridWise Olympic Peninsula Testbed Demonstration Project” – that illustrates how digital end-use technology and dynamic pricing combine to provide value to residential customers while increasing network reliability and reducing required infrastructure investments through decentralized coordination. The availability (and increasing cost-effectiveness) of digital technologies enabling consumers to monitor and control their energy use and to see transparent price signals has made existing retail rate regulation obsolete. Instead, the policy recommendation that this analysis implies is that regulators should reduce entry barriers in retail markets and allow for dynamic pricing and product differentiation, which are the keys to achieving decentralized coordination.

THE KEYS: DYNAMIC PRICING, DIGITAL TECHNOLOGY

Dynamic pricing provides price signals that reflect variations in the actual costs and benefits of providing electricity at different times of the day. Some of the more sophisticated forms of dynamic pricing harness the dramatic improvements in information technology of the past 20 years to communicate these price signals to consumers. These same technological developments also give consumers a tool for managing their energy use, in either manual or automated form. Currently, with almost all U.S. consumers (even industrial and commercial ones) paying average prices, there’s little incentive for consumers to manage their consumption and shift it away from peak hours. This inelastic demand leads to more capital investment in power plants and transmission and distribution facilities than would occur if consumers could make choices based on their preferences and in the face of dynamic pricing.

Retail price regulation stifles the economic processes that lead to both static and dynamic efficiency. Keeping retail prices fixed truncates the information flow between wholesale and retail markets, and leads to inefficiency, price spikes and price volatility. Fixed retail rates for electric power service mean that the prices individual consumers pay bear little or no relation to the marginal cost of providing power in any given hour. Moreover, because retail prices don’t fluctuate, consumers are given no incentive to change their consumption as the marginal cost of producing electricity changes. This severing of incentives leads to inefficient energy consumption in the short run and also causes inappropriate investment in generation, transmission and distribution capacity in the long run. It has also stifled the implementation of technologies that enable customers to make active consumption decisions, even though communication technologies have become ubiquitous, affordable and user-friendly.

Dynamic pricing can include time-of-use (TOU) rates, which are different prices in blocks over a day (based on expected wholesale prices), or real-time pricing (RTP) in which actual market prices are transmitted to consumers, generally in increments of an hour or less. A TOU rate typically applies predetermined prices to specific time periods by day and by season. RTP differs from TOU mainly because RTP exposes consumers to unexpected variations (positive and negative) due to demand conditions, weather and other factors. In a sense, fixed retail rates and RTP are the end points of a continuum of how much price variability the consumer sees, and different types of TOU systems are points on that continuum. Thus, RTP is but one example of dynamic pricing. Both RTP and TOU provide better price signals to customers than current regulated average prices do. They also enable companies to sell, and customers to purchase, electric power service as a differentiated product.
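To make the distinction concrete, here is a small, illustrative-only calculation comparing what the same day’s consumption would cost under a flat rate, a TOU rate and an RTP rate; the prices and load profile are invented, not drawn from any tariff or from the project data.

"""Illustrative-only comparison of flat, TOU and RTP billing for one day.
Prices and the load profile are invented for the example."""

# Hourly consumption in kWh for a single household (24 values, hours 0-23).
load_kwh = [0.8] * 6 + [1.2] * 4 + [1.0] * 4 + [1.8] * 4 + [2.4] * 3 + [1.0] * 3

flat_price = 0.11  # $/kWh, the same every hour


def tou_price(hour: int) -> float:
    # Predetermined blocks: off-peak, shoulder, on-peak (assumed values).
    if 17 <= hour <= 20:
        return 0.22      # on-peak
    if 7 <= hour <= 16:
        return 0.12      # shoulder
    return 0.07          # off-peak


# RTP: hourly prices that follow (hypothetical) wholesale market outcomes.
rtp_price = [0.05] * 6 + [0.10] * 4 + [0.09] * 4 + [0.14] * 4 + [0.30] * 3 + [0.08] * 3

flat_bill = sum(kwh * flat_price for kwh in load_kwh)
tou_bill = sum(kwh * tou_price(h) for h, kwh in enumerate(load_kwh))
rtp_bill = sum(kwh * p for kwh, p in zip(load_kwh, rtp_price))

print(f"flat: ${flat_bill:.2f}  TOU: ${tou_bill:.2f}  RTP: ${rtp_bill:.2f}")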

TECHNOLOGY’S ROLE IN RETAIL CHOICE

Digital technologies are becoming increasingly available to reduce the cost of sending prices to people and their devices. The 2007 Galvin Electricity Initiative report “The Path to Perfect Power: New Technologies Advance Consumer Control” catalogs a variety of end-user technologies (from price-responsive appliances to wireless home automation systems) that can communicate electricity price signals to consumers, retain data on their consumption and be programmed to respond automatically to trigger prices that the consumer chooses based on his or her preferences. [2] Moreover, the two-way communication advanced metering infrastructure (AMI) that enables a retailer and consumer to have that data transparency is also proliferating (albeit slowly) and declining in price.

Dynamic pricing and the digital technology that enables communication of price information are symbiotic. Dynamic pricing in the absence of enabling technology is meaningless. Likewise, technology without economic signals to respond to is extremely limited in its ability to coordinate buyers and sellers in a way that optimizes network quality and resource use. [3] The combination of dynamic pricing and enabling technology changes the value proposition for the consumer from “I flip the switch, and the light comes on” to a more diverse and consumer-focused set of value-added services.

These diverse value-added services empower consumers and enable them to control their electricity choices with more granularity and precision than the environment in which they think solely of the total amount of electricity they consume. Digital metering and end-user devices also decrease transaction costs between buyers and sellers, lowering barriers to exchange and to the formation of particular markets and products.

Whether they take the form of building control systems that enable the consumer to see the amount of power used by each function performed in a building or appliances that can be programmed to behave differently based on changes in the retail price of electricity, these products and services provide customers with an opportunity to make better choices with more precision than ever before. In aggregate, these choices lead to better capacity utilization and better fuel resource utilization, and provide incentives for innovation to meet customers’ needs and capture their imaginations. In this sense, technological innovation and dynamic retail electricity pricing are at the heart of decentralized coordination in the electric power network.

EVIDENCE

Led by the Pacific Northwest National Laboratory (PNNL), the Olympic Peninsula GridWise Testbed Project served as a demonstration project to test a residential network with highly distributed intelligence and market-based dynamic pricing. [4] Washington’s Olympic Peninsula is an area of great scenic beauty, with population centers concentrated on the northern edge. The peninsula’s electricity distribution network is connected to the rest of the network through a single distribution substation. While the peninsula is experiencing economic growth and associated growth in electricity demand, the natural beauty of the area and other environmental concerns served as an impetus for area residents to explore options beyond simply building generation capacity on the peninsula or adding transmission capacity.

Thus, this project tested how the combination of enabling technologies and market-based dynamic pricing affected utilization of existing capacity, deferral of capital investment and the ability of distributed demand-side and supply-side resources to create system reliability. Two questions were of primary interest:

1) What dynamic pricing contracts do consumers find attractive, and how does enabling technology affect that choice?

2) To what extent will consumers choose to automate energy use decisions?

The project – which ran from April 2006 through March 2007 – included 130 broadband-enabled households with electric heating. Each household received a programmable communicating thermostat (PCT) with a visual user interface that allowed the consumer to program the thermostat for the home – specifically to respond to price signals, if desired. Households also received water heaters equipped with a GridFriendly appliance (GFA) controller chip developed at PNNL that enables the water heater to receive price signals and be programmed to respond automatically to those price signals. Consumers could control the sensitivity of the water heater through the PCT settings.
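A hedged sketch of the kind of rule a price-responsive thermostat or water-heater controller might apply is shown below; the thresholds and setpoints are invented for illustration and do not describe the actual PCT or GFA firmware.

"""Illustrative price-response rule for a programmable communicating thermostat.
Thresholds and setpoints are assumptions, not the PNNL PCT/GFA logic."""


def heating_setpoint(price_per_kwh: float,
                     comfort_setpoint_c: float = 21.0,
                     price_threshold: float = 0.15,
                     max_setback_c: float = 3.0) -> float:
    """Back off the heating setpoint as the price rises above a chosen threshold."""
    if price_per_kwh <= price_threshold:
        return comfort_setpoint_c
    # Scale the setback with how far price exceeds the threshold, capped.
    excess = price_per_kwh - price_threshold
    setback = min(max_setback_c, excess * 20.0)  # 20 deg C per $/kWh, arbitrary
    return comfort_setpoint_c - setback


if __name__ == "__main__":
    for price in (0.08, 0.18, 0.40):
        print(price, "->", round(heating_setpoint(price), 1), "deg C")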

These households also participated in a market field experiment involving dynamic pricing. While they continued to purchase energy from their local utility at a fixed, discounted price, they also received a cash account with a predetermined balance, which was replenished quarterly. The energy use decisions they made would determine their overall bill, which was deducted from their cash account, and they were able to keep any difference as profit. The worst a household could do was a zero balance, so they were no worse off than if they had not participated in the experiment. At any time customers could log in to a secure website to see their current balances and determine the effectiveness of their energy use strategies.

On signing up for the project, the households received extensive information and education about the technologies available to them and the kinds of energy use strategies facilitated by these technologies. They were then asked to choose a retail pricing contract from three options: a fixed-price contract (with an embedded price-risk premium); a TOU contract with a variable critical peak price (CPP) component that could be called in periods of tight capacity; or an RTP contract that would reflect a wholesale market-clearing price in five-minute intervals. The RTP was determined using a uniform-price double auction in which buyers (households and commercial customers) submit bids and sellers submit offers simultaneously. This project represented the first instance in which a double-auction retail market design was tested in electric power.
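For readers unfamiliar with the mechanism, the following simplified sketch clears a uniform-price double auction using a textbook matching rule; the bids, offers and the midpoint pricing convention are illustrative assumptions, not the project’s actual market engine.

"""Simplified uniform-price double auction clearing (illustrative only).
Bids and offers are (price, quantity) pairs; the clearing rule is a sketch."""


def clear_double_auction(bids, offers):
    """Return (clearing_price, cleared_quantity) for one interval.

    bids:   list of (price, kw) buyers are willing to pay, any order
    offers: list of (price, kw) sellers are willing to accept, any order
    """
    bids = sorted(bids, key=lambda b: -b[0])     # highest willingness to pay first
    offers = sorted(offers, key=lambda o: o[0])  # cheapest supply first

    cleared, last_bid, last_offer = 0.0, None, None
    bi = oi = 0
    bid_left = offer_left = 0.0
    while bi < len(bids) and oi < len(offers):
        if bid_left == 0.0:
            last_bid, bid_left = bids[bi]
        if offer_left == 0.0:
            last_offer, offer_left = offers[oi]
        if last_bid < last_offer:
            break  # no more mutually beneficial trades
        traded = min(bid_left, offer_left)
        cleared += traded
        bid_left -= traded
        offer_left -= traded
        if bid_left == 0.0:
            bi += 1
        if offer_left == 0.0:
            oi += 1
    if cleared == 0.0:
        return None, 0.0
    price = (last_bid + last_offer) / 2.0  # one common uniform-price convention
    return price, cleared


if __name__ == "__main__":
    bids = [(0.30, 5), (0.18, 10), (0.09, 8)]    # $/kWh, kW
    offers = [(0.05, 6), (0.12, 10), (0.25, 12)]
    print(clear_double_auction(bids, offers))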

The households ranked the contracts and were then divided fairly evenly among the three types, along with a control group that received the enabling technologies and had their energy use monitored but did not participate in the dynamic pricing market experiment. All households received either their first or second choice; interestingly, more than two-thirds of the households ranked RTP as their first choice. This result counters the received wisdom that residential customers want only reliable service at low, stable prices.

According to the 2007 report on the project by D.J. Hammerstrom (and others), on average participants saved 10 percent on their electricity bills. [5] That report also includes the following findings about the project:

Result 1. For the RTP group, peak consumption decreased by 15 to 17 percent relative to what the peak would have been in the absence of the dynamic pricing – even though their overall energy consumption increased by approximately 4 percent. This flattening of the load duration curve indicates shifting some peak demand to nonpeak hours. Such shifting increases the system’s load factor, improving capacity utilization and reducing the need to invest in additional capacity, for a given level of demand. A 15 to 17 percent reduction is substantial and is similar in magnitude to the reductions seen in other dynamic pricing pilots.

After controlling for price response, weather effects and weekend days, the RTP group’s overall energy consumption was 4 percent higher than that of the fixed price group. This result, in combination with the load duration effect noted above, indicates that the overall effect of RTP dynamic pricing is to smooth consumption over time, not decrease it.

Result 2. The TOU group achieved both a large price elasticity of demand (-0.17), based on hourly data, and an overall energy reduction of approximately 20 percent relative to the fixed price group.

After controlling for price response, weather effects and weekend days, the TOU group’s overall energy consumption was 20 percent lower than that of the fixed price group. This result indicates that the TOU (with occasional critical peaks) pricing induced overall conservation – a result consistent with the results of the California SPP project. The estimated price elasticity of demand in the TOU group was -0.17, which is high relative to that observed in other projects. This elasticity suggests that the pricing coupled with the enabling end-use technology amplifies the price responsiveness of even small residential consumers.

Despite these results, dynamic pricing and enabling technologies are proliferating slowly in the electricity industry. Proliferation requires a combination of formal and informal institutional change to overcome a variety of barriers. And while formal institutional change (primarily in the form of federal legislation) is reducing some of these barriers, it remains an incremental process. The traditional rate structure, fixed by state regulation and slow to change, presents a substantial barrier. Predetermined load profiles inhibit market-based pricing by ignoring individual customer variation and the information that customers can communicate through choices in response to price signals. Furthermore, the persistence of standard offer service at a discounted rate (that is, a rate that does not reflect the financial cost of insurance against price risk) stifles any incentive customers might have to pursue other pricing options.

The most significant – yet also most intangible and difficult-to-overcome – obstacle to dynamic pricing and enabling technologies is inertia. All of the primary stakeholders in the industry – utilities, regulators and customers – harbor status quo bias. Incumbent utilities face incentives to maintain the regulated status quo as much as possible (given the economic, technological and demographic changes surrounding them) – and thus far, they’ve been successful in using the political process to achieve this objective.

Customer inertia also runs deep because consumers have not had to think about their consumption of electricity or the price they pay for it – a bias consumer advocates generally reinforce by arguing that low, stable prices for highly reliable power are an entitlement. Regulators and customers value the stability and predictability that have arisen from this vertically integrated, historically supply-oriented and reliability-focused environment; however, what is unseen and unaccounted for is the opportunity cost of such predictability – the foregone value creation in innovative services, empowerment of customers to manage their own energy use and use of double-sided markets to enhance market efficiency and network reliability. Compare this unseen potential with the value creation in telecommunications, where even young adults can understand and adapt to cell phone-pricing plans and benefit from the stream of innovations in the industry.

CONCLUSION

The potential for a highly distributed, decentralized network of devices automated to respond to price signals creates new policy and research questions. Do individuals automate sending prices to devices? If so, do they adjust settings, and how? Does the combination of price effects and innovation increase total surplus, including consumer surplus? In aggregate, do these distributed actions create emergent order in the form of system reliability?

Answering these questions requires thinking about the diffuse and private nature of the knowledge embedded in the network, and the extent to which such a network becomes a complex adaptive system. Technology helps determine whether decentralized coordination and emergent order are possible; the dramatic transformation of digital technology in the past few decades has decreased transaction costs and increased the extent of feasible decentralized coordination in this industry. Institutions – which structure and shape the contexts in which such processes occur – provide a means for creating this coordination. And finally, regulatory institutions affect whether or not this coordination can occur.

For this reason, effective regulation should focus not on allocation but rather on decentralized coordination and how to bring it about. This in turn means a focus on market processes, which are adaptive institutions that evolve along with technological change. Regulatory institutions should also be adaptive, and policymakers should view regulatory policy as work in progress so that the institutions can adapt to unknown and changing conditions and enable decentralized coordination.

ENDNOTES

1. Order can take many forms in a complex system like electricity – for example, keeping the lights on (short-term reliability), achieving economic efficiency, optimizing transmission congestion, longer-term resource adequacy and so on.

2. Roger W. Gale, Jean-Louis Poirier, Lynne Kiesling and David Bodde, “The Path to Perfect Power: New Technologies Advance Consumer Control,” Galvin Electricity Initiative report (2007). www.galvinpower.org/resources/galvin.php?id=88

3. The exception to this claim is the TOU contract, where the rate structure is known in advance. However, even on such a simple dynamic pricing contract, devices that allow customers to see their consumption and expenditure in real time instead of waiting for their bill can change behavior.

4. D.J. Hammerstrom et al., “Pacific Northwest GridWise Testbed Demonstration Projects, Volume I: The Olympic Peninsula Project” (2007). http://gridwise.pnl.gov/docs/op_project_final_report_pnnl17167.pdf

5. Ibid.

About Alcatel-Lucent

Alcatel-Lucent’s vision is to enrich people’s lives by transforming the way the world communicates. Alcatel-Lucent provides solutions that enable service providers, enterprises and governments worldwide to deliver voice, data and video communication services to end users. As a leader in carrier and enterprise IP technologies; fixed, mobile and converged broadband access; applications and services, Alcatel-Lucent offers the end-to-end solutions that enable compelling communications services for people at work, at home and on the move.

With 77,000 employees and operations in more than 130 countries, Alcatel-Lucent is a local partner with global reach. The company has the most experienced global services team in the industry and includes Bell Labs, one of the largest research, technology and innovation organizations focused on communications. Alcatel-Lucent achieved adjusted revenues of €17.8 billion in 2007 and is incorporated in France, with executive offices located in Paris.

YOUR ENERGY AND UTILITY PARTNER

Alcatel-Lucent offers comprehensive capabilities that combine carrier-grade communications technology and expertise with utility industry-specific knowledge. Alcatel-Lucent’s IP transformation expertise and utility market-specific knowledge have led to the development of turnkey communications solutions designed for the energy and utility market. Alcatel-Lucent has extensive experience in:

  • Transforming and renewing network technologies;
  • Designing and implementing SmartGrid initiatives;
  • Meeting NERC CIP compliance and security requirements;
  • Working in live power generation, transmission and distribution environments;
  • Implementing and managing complex mission-critical communications projects; and
  • Developing best-in-class partnerships with organizations like CURRENT Communications, Ambient, BelAir Networks, Alvarion and others in the utility industry.

Working with Alcatel-Lucent enables energy and utility companies to realize the increased reliability and greater efficiency of next-generation communications technology, providing a platform for – and minimizing the risks associated with – moving to SmartGrid solutions. And Alcatel-Lucent helps energy and utility companies achieve compliance with regulatory requirements and reduce operational expenses while maintaining the security, integrity and high availability of their power infrastructure and services.

ALCATEL-LUCENT IP MPLS SOLUTION FOR THE NEXT-GENERATION UTILITY NETWORK

Utility companies are experienced at building and operating reliable and effective networks to ensure the delivery of essential information and maintain flawless service delivery. The Alcatel-Lucent IP/MPLS solution can enable utility operators to extend and enhance their networks with new technologies like IP, Ethernet and MPLS. These new technologies will enable the utility to optimize its network to reduce both capital expenditures and operating expenses without jeopardizing reliability. Advanced technologies also allow the introduction of new applications that can improve operational and workflow efficiency within the utility. Alcatel-Lucent leverages cutting-edge technologies along with the company’s broad and deep experience in the utility industry to help utility operators build better, next-generation networks with IP/MPLS.

THE ALCATEL-LUCENT ADVANTAGE

Alcatel-Lucent has years of experience in the development of IP, MPLS and Ethernet technologies. The Alcatel-Lucent IP/MPLS solution offers utility operators the flexibility, scale and feature sets required for mission-critical operation. With the broadest portfolio of products and services in the telecommunications industry, Alcatel-Lucent has the unparalleled ability to design and deliver end-to-end solutions that drive next-generation communications networks.

How Intelligent Is Your Grid?

Many people in the utility industry see the intelligent grid — an electric transmission and distribution network that uses information technology to predict and adjust to network changes — as a long-term goal that utilities are still far from achieving. Energy Insights research, however, indicates that today’s grid is more intelligent than people think. In fact, utilities can begin building the network of the future today by better leveraging their existing resources and focusing on the intelligent-grid backbone.

DRIVERS FOR THE INTELLIGENT GRID

Before discussing the intelligent grid backbone, it’s important to understand the drivers directing the intelligent grid’s progress. While many groups — such as government, utilities and technology companies — may be pushing the intelligent grid forward, they are also slowing it down. Here’s how:

  • Government. With the 2005 U.S. Energy Policy Act and the more recent 2007 Energy Independence and Security Act, the federal government has acknowledged the intelligent grid’s importance and is supporting investment in the area. Furthermore, public utility commissions (PUCs) have begun supporting intelligent grid investments like smart metering. At the same time, however, PUCs have a duty to maintain reasonable prices. Since utilities have not extensively tested the benefits of some intelligent grid technologies, such as distribution line sensors, many regulators hesitate to support utilities investing in intelligent grid technologies beyond smart metering.
  • Utilities. Energy Insights research indicates that information technology, in general, enables utilities to increase operational efficiency and reduce costs. For this reason, utilities are open to information technology; however, they’re often looking for quick cost recovery and benefits. Many intelligent grid technologies provide longer-term benefits, making them difficult to cost-justify over the short term. Since utilities are risk-aware, this can make intelligent grid investments look riskier than traditional information technology investments.
  • Technology. Although advanced enough to function on the grid today, many intelligent grid technologies could become quickly outdated thanks to the rapidly developing marketplace. What’s more, the life span of many intelligent grid technologies is not as long as those of traditional grid assets. For example, a smart meter’s typical life span is about 10 to 15 years, compared with 20 to 30 years for an electro-mechanical meter.

With strong drivers and competing pressures like these, it’s not a question of whether the intelligent grid will happen but when utilities will implement new technologies. Given the challenges facing the intelligent grid, the transition will likely be more of an evolution than a revolution. As a result, utilities are making their grids more intelligent today by focusing on the basics, or the intelligent grid backbone.

THE INTELLIGENT GRID BACKBONE

What comprises this backbone? Answering this question requires a closer look at how intelligence changes the grid. Typically, a utility has good visibility into the operation of its generation and transmission infrastructure but poor visibility into its distribution network. As a result, the utility must respond to a changing distribution network based on very limited information. Furthermore, if a grid event requires attention — such as in the case of a transformer failure — people must review information, decide to act and then manually dispatch field crews. This type of approach translates to slower, less informed reactions to grid events.

The intelligent grid changes these reactions through a backbone of technologies — sensors, communication networks and advanced analytics — especially developed for distribution networks. To better understand these changes, we can imagine a scenario where a utility has an outage on its distribution network. As shown in Figure 1, additional grid sensors collect more information, making it easier to detect problems. Communications networks then allow sensors to convey the problem to the utility. Advanced analytics can efficiently process this information and determine more precisely where the fault is located, as well as automatically respond to the problem and dispatch field crews. These components not only enable faster, better-informed reactions to grid problems, they can also do real-time pricing, improve demand response and better handle distributed and renewable energy sources.
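As a hedged, end-to-end illustration of that backbone (sensor readings arriving over a communications network, analytics locating the fault, a crew being dispatched), the sketch below strings the three pieces together; the device IDs, thresholds and fault-location rule are all invented.

"""Sketch of the sensor -> communications -> analytics flow for an outage.
All identifiers, thresholds and the fault-location rule are illustrative."""

from dataclasses import dataclass


@dataclass
class SensorReading:
    sensor_id: str
    feeder_section: str
    current_amps: float


def collect_readings():
    """Stand-in for readings arriving over the communications network."""
    return [
        SensorReading("line_sensor_12", "feeder7_sec2", 410.0),
        SensorReading("line_sensor_13", "feeder7_sec3", 0.0),  # downstream of fault
        SensorReading("line_sensor_14", "feeder7_sec4", 0.0),
    ]


def locate_fault(readings):
    """Simple rule: the fault lies at the first de-energized section downstream."""
    for upstream, downstream in zip(readings, readings[1:]):
        if upstream.current_amps > 0 and downstream.current_amps == 0:
            return downstream.feeder_section
    return None


def dispatch_crew(section):
    print(f"Dispatching field crew to {section}")


if __name__ == "__main__":
    faulted_section = locate_fault(collect_readings())
    if faulted_section:
        dispatch_crew(faulted_section)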

A CLOSER LOOK AT BACKBONE COMPONENTS

A deeper dive into each of these intelligent grid backbone technologies reveals how utilities are gaining more intelligence about their grid today.

Network sensors are important not only for real-time operations — such as locating faults and connecting distributed energy sources to the grid — but also for providing a rich historical data source to improve asset maintenance and load research and forecasting. Today, more utilities are using sensors to better monitor their distribution networks; however, they’re focused primarily on smart meters. The reason for this is that smart meters have immediate operational benefits that make them attractive for many utilities today, including reducing meter reader costs, offering accurate billing information, providing theft control and satisfying regulatory requirements. Yet this focus on smart meters has created a monitoring gap between the transmission network and the smart meter.

Sensors from companies such as General Electric, ABB, PowerSense, GridSense and Serveron are available to fill this monitoring gap. They track everything from load balancing and transformer status to circuit breakers and tap changers, energized downed lines, high-impedance faults and stray voltage. Yet utilities hesitate to invest in these sensors because they lack the immediate operational benefits of smart meters.

By monitoring this gap, however, utilities can realize longer-term grid benefits, such as a reduced need to build generation capacity. Utilities have found they can begin monitoring this gap by:

  • Prioritizing sensor investments. Customer complaints and regulatory pressure have pushed some utilities to take action in particular parts of their service territory. For example, one utility Energy Insights studied received numerous customer complaints about a particular feeder’s reliability, so the utility invested in line sensors for that area. Another utility began considering sensor investments in troubled areas of its distribution network when regulators demanded that it improve its System Average Interruption Frequency Index (SAIFI) and System Average Interruption Duration Index (SAIDI) rankings from the bottom 50 percent to the top 25 percent of benchmarked utilities. By focusing on such areas, utilities can achieve “quick wins” with sensors and build confidence in deploying additional sensors on their distribution grids.
  • Realizing it’s all about compromise. Even in high-priority areas, it may not make financial sense for a utility to deploy the full range of sensors for every possible asset. In some situations, utilities may target a particular area of the service territory with a higher density of sensors. For example, a large U.S. investor-owned utility with a medium voltage-sensing program placed a high density of sensors along a specific section of its service territory. On the other hand, utilities might cover a broader area of service territory with fewer sensors, similar to the approach taken by a large investor-owned utility Energy Insights looked at that monitored only transformers across its service territory.
  • Rolling in sensors with other intelligent grid initiatives. Some utilities find ways to combine their smart metering projects with other distribution network sensors or to leverage existing investments that could support additional sensors. One utility that Energy Insights looked at installed transformer sensors along with a smart meter initiative and leveraged the communications networks it used for smart metering.

While sensors provide an important means of capturing information about the grid, communication networks are critical to moving that information throughout the intelligent grid — whether between sensors or field crews. Typically, to enable intelligent grid communications, utilities must either build new communications networks to bring intelligence to the existing grid or incorporate communication networks into new construction. Yet utilities today are also leveraging existing or recently installed communications networks to facilitate more sophisticated intelligent grid initiatives such as the following:

  • Smart metering and automated meter-reading (AMR) initiatives. With the current drive to install smart meters, many utilities are covering their distribution networks with communications infrastructure. Furthermore, existing AMR deployments may include communications networks that can bring data back to the utility. Some utilities are taking advantage of these networks to begin plugging other sensors into their distribution networks.
  • Mobile workforce. The deployment of mobile technologies for field crews is another hot area for utilities right now. Utilities are deploying cellular networks to give field crews both voice and data communications. Although utilities have typically been hesitant to work with third-party communications providers, they’ve become more comfortable with outside providers after using them for mobile technologies. Since most cellular networks also provide data coverage, some utilities are beginning to use these providers to transmit sensor information across their distribution networks.

Since smart metering and mobile communications networks are already in place, the incremental cost of installing sensors on these networks is relatively low. The key is making sure that different sensors and components can plug into these networks easily (for example, using a standard communications protocol).
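
As a minimal sketch of that idea, the Python snippet below packages a hypothetical line-sensor reading into a small, self-describing message that could ride over an existing smart metering or cellular data network. The field names and message format are assumptions made for illustration; a real deployment would use whichever standard protocol the utility’s network and head-end systems support.

    import json
    from datetime import datetime, timezone

    def encode_reading(sensor_id, feeder, amps, volts):
        """Package a line-sensor reading as a small, self-describing message.

        Illustrative format only; the field names are hypothetical, not a published standard.
        """
        return json.dumps({
            "sensor_id": sensor_id,
            "feeder": feeder,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "current_a": amps,
            "voltage_v": volts,
        })

    # Any sensor that emits the same structure can plug into the same back-haul
    # network and head-end parser without custom integration work.
    print(encode_reading("LS-0421", "FDR-17", amps=312.4, volts=7210.0))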

The last key piece of the intelligent grid backbone is advanced analytics. Utilities are required to make quick decisions every day if they’re to maintain a safe and reliable grid, and the key to making such decisions is being well informed. Intelligent grid analytics can help utilities quickly process large amounts of data from sensors so that they can make those informed decisions. However, how quickly a decision needs to be made depends on the situation. Intelligent grid analytics assist with two types of decisions: very quick decisions (veQuids) and quick decisions (Quids). veQuids are made in milliseconds by computers and intelligent devices analyzing complex, real-time data – an intelligent grid vision that’s still a future development for most utilities.

Fortunately, many proactive decisions about the grid don’t have to be made in milliseconds. Many utilities today can make Quids — often manual decisions — to predict and adjust to network changes within a time frame of minutes, days or even months.
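
The distinction can be made concrete with a small sketch: the same stream of readings can feed both a veQuid-style automated check that must act immediately and a Quid-style flag that is simply queued for an engineer to review later. The thresholds, feeder names and readings below are hypothetical.

    # Hypothetical illustration of the two decision time scales described above.
    TRIP_THRESHOLD_A = 600.0      # veQuid territory: act in milliseconds, automatically
    PLANNING_THRESHOLD_A = 450.0  # Quid territory: flag for review over minutes to months

    review_queue = []

    def handle_reading(feeder, amps):
        if amps > TRIP_THRESHOLD_A:
            # veQuid: an intelligent device would act here without waiting for a person.
            return f"TRIP {feeder} at {amps:.0f} A"
        if amps > PLANNING_THRESHOLD_A:
            # Quid: record the condition so planners can adjust loading or schedule work.
            review_queue.append((feeder, amps))
        return "ok"

    for feeder, amps in [("FDR-17", 310.0), ("FDR-22", 480.0), ("FDR-09", 655.0)]:
        print(feeder, handle_reading(feeder, amps))

    print("Flagged for planner review:", review_queue)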

No matter how quick the decision, however, all predictive efforts are based on access to good-quality data. In putting their Quid capabilities to use today — in particular for predictive maintenance and smart metering — utilities are building not only intelligence about their grids but also a foundation for providing more advanced veQuids analytics in the future through the following:

  • The information foundation. Smart metering and predictive maintenance require utilities to collect not only more data but also more real-time data. Smart metering also helps break down barriers between retail and operational data sources, which in turn creates better visibility across many data sources to provide a better understanding of a complex grid.
  • The automation transition. Making the leap from Quids to veQuids requires more than better access to more information — it also requires automation. While fully automated decision making is still a thing of the future, many utilities are taking steps to compile and display data automatically and to perform some basic analysis, using dashboards from providers such as OSIsoft and Obvient Strategies to present high-level information customized for individual users. The user then analyzes the data further, makes decisions and takes action based on that analysis. Many utilities today use this dashboard model to monitor critical assets using both real-time and historical data, as sketched below.
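
A minimal sketch of that dashboard pattern follows: readings are compiled automatically, a basic summary is computed per asset, and anything outside its normal band is highlighted for the user, who still makes the final decision. The assets, readings and alert limit are invented for illustration and do not describe any particular vendor’s product.

    # Hypothetical dashboard-style roll-up: automatic compilation plus basic analysis,
    # with the decision itself left to the user.
    readings = {
        # asset ID: recent transformer loading (percent of nameplate rating)
        "XFMR-101": [72, 75, 78, 81],
        "XFMR-214": [55, 54, 57, 56],
        "XFMR-330": [91, 96, 99, 104],
    }
    ALERT_PERCENT = 95

    def summarize(series):
        return {"latest": series[-1],
                "average": sum(series) / len(series),
                "alert": series[-1] >= ALERT_PERCENT}

    for asset, series in readings.items():
        summary = summarize(series)
        flag = "REVIEW" if summary["alert"] else "normal"
        print(f"{asset}: latest {summary['latest']}%, avg {summary['average']:.0f}% -> {flag}")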

ENSURING A MORE INTELLIGENT GRID TODAY AND TOMORROW

As these backbone components show, utilities already have some intelligence on their grids. Now they’re building on that intelligence by leveraging existing infrastructure and resources — whether it’s voice communications providers for data transmission or Quid resources to build a foundation for the veQuids of tomorrow. In particular, utilities need to look at:

  • Scalability. Utilities need to make sure that whatever technologies they put on the grid today can grow to accommodate larger portions of the grid in the future.
  • Flexibility. Given rapid technology changes in the marketplace, utilities need to make sure their technology is flexible and adaptable. For example, utilities should consider smart meters that have the ability to change out communications cards to allow for new technologies.
  • Integration. Due to the evolutionary nature of the grid, and with so many intelligent grid components that must work together (intelligent sensors at substations, transformers and power lines; smart meters; and distributed and renewable energy sources), utilities need to make sure these disparate components can work with one another. Utilities need to consider how to introduce more flexibility into their intelligent grids to accommodate the increasingly complex network of devices.

As today’s utilities employ targeted efforts to build intelligence about the grid, they must keep in mind that whatever action they take today – no matter how small – must ultimately help them meet the demands of tomorrow.

Delivering the Tools for Creating the Next-Generation Electrical SmartGrid

PowerSense delivers cutting-edge monitoring and control equipment together with integrated supervision to enable the modern electrical utility to prepare its existing power infrastructure for tomorrow’s SmartGrid.

PowerSense uses world-leading technology to merge existing and new power infrastructures into the existing SCADA and IT systems of the electrical utilities. This integration of the upgraded power infrastructure and existing IT systems instantly optimizes outage and fault management, thereby decreasing customer minutes lost (the System Average Interruption Duration Index, or SAIDI).

At the same time, this integration helps the electrical utility further improve asset management (resulting in major cost savings) and power management (resulting in high-performance outage management and high power efficiency). The PowerSense product line is called DISCOS® (Distribution network Integrated Supervision and Control System).

DISCOS®

The following outlines the business and system values offered by the DISCOS® product line.

Business Values

  • Cutting-edge optical technology (the sensor)
  • Easily and safely retrofitted (sensors can be fitted into all transformer types)
  • End-to-end solutions (from sensors to laptop)
  • Installation in steps (implementation based on cost-benefit analysis)

System Values
  • Current (for each phase)
  • Voltage (for each phase)
  • Frequency
  • Power (active and reactive, with direction)
  • Distance-to-fault measurement
  • Control of breakers and service relays
  • Analog inputs
  • Measurement of harmonic content for I and V
  • Measurement of earth fault

These parameters are available for both medium- and low-voltage power lines.
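
To make the list above more concrete, the sketch below shows how several of these quantities (RMS current and voltage, active and reactive power, and low-order harmonic content) can in principle be derived from sampled voltage and current waveforms. This is a generic signal-processing illustration using synthetic 50 Hz waveforms, not a description of how the DISCOS® firmware actually computes these values.

    import numpy as np

    FS = 4000   # sampling rate in Hz (assumed)
    F0 = 50     # fundamental frequency in Hz (assumed)
    t = np.arange(0, 0.2, 1 / FS)

    # Synthetic single-phase waveforms: the current lags the voltage and carries a 5th harmonic.
    v = 325 * np.sin(2 * np.pi * F0 * t)
    i = 140 * np.sin(2 * np.pi * F0 * t - np.pi / 6) + 10 * np.sin(2 * np.pi * 5 * F0 * t)

    v_rms, i_rms = np.sqrt(np.mean(v ** 2)), np.sqrt(np.mean(i ** 2))
    p_active = np.mean(v * i)                                      # W
    s_apparent = v_rms * i_rms                                     # VA
    q_reactive = np.sqrt(max(s_apparent ** 2 - p_active ** 2, 0))  # var (magnitude only)

    # Harmonic content of the current via FFT: peak amplitude at each multiple of F0.
    spectrum = np.abs(np.fft.rfft(i)) / (len(i) / 2)
    freqs = np.fft.rfftfreq(len(i), 1 / FS)
    harmonics = {h: round(float(spectrum[np.argmin(np.abs(freqs - h * F0))]), 1) for h in range(1, 6)}

    print(f"V_rms={v_rms:.0f} V, I_rms={i_rms:.0f} A, P={p_active:.0f} W, Q={q_reactive:.0f} var")
    print("Current harmonics (A peak):", harmonics)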

OPTICAL SENSOR TECHNOLOGY

With its stability and linearity, PowerSense’s cutting-edge sensor technology is setting new standards for current measurement in general, and for the company’s primary business area of MV grid monitoring in particular.

The DISCOS® Current Sensor is part of the DISCOS® Opti module. The sensor monitors the current magnitude and phase angle on both the LV and MV sides of the transformer.

BASED ON THE FARADAY EFFECT

Today, only a few applications in measuring instruments are based on the Faraday rotation principle. For instance, the Faraday effect has been used for measuring optical rotary power, for amplitude modulation of light and for remote sensing of magnetic fields.

Now, due to advanced computing techniques, PowerSense is able to offer a low-priced optical sensor based on the Faraday effect.
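
As background on the measurement principle (a generic sketch of the physics, not of PowerSense’s proprietary design): Faraday rotation turns the plane of polarization of light passing through a magneto-optic material by an angle proportional to the magnetic field along the light path. If the sensing element effectively encircles the conductor, Ampère’s law makes that integrated field proportional to the line current, so the measured rotation angle maps directly to current. The Verdet constant and geometry below are illustrative assumptions.

    import math

    MU_0 = 4e-7 * math.pi  # vacuum permeability (H/m)

    def current_from_rotation(theta_rad, verdet_rad_per_tesla_m, loops=1):
        """Infer line current from a measured Faraday rotation angle.

        Assumes an idealized closed optical path around the conductor, so that
        theta = V * mu_0 * loops * I (Ampere's law). Values are illustrative only.
        """
        return theta_rad / (verdet_rad_per_tesla_m * MU_0 * loops)

    # Example: a hypothetical sensor with a Verdet constant of 20 rad/(T*m) and one loop
    # measuring a rotation of 0.005 rad implies roughly 199 A of line current.
    print(f"{current_from_rotation(0.005, 20.0, loops=1):.0f} A")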

THE COMPANY

PowerSense A/S was established on September 1, 2006, by DONG Energy A/S (formerly Nesa A/S) as a spin-off of the DISCOS® product line business. The purpose of the spin-off was to ensure the best future business conditions for the DISCOS® product line.

Following the spin-off, BankInvest A/S, a Danish investment bank, holds 70 percent of the share capital, and DONG Energy A/S continues to hold the remaining 30 percent.

The Distributed Utility of the (Near) Future

The next 10 to 15 years will see major changes – what future historians might even call upheavals – in the way electricity is distributed to businesses and households throughout the United States. The exact nature of these changes and their long-term effect on the security and economic well-being of this country are difficult to predict. However, a consensus already exists among those working within the industry – as well as with politicians and regulators, economists, environmentalists and (increasingly) the general public – that these fundamental changes are inevitable.

This need for change is in evidence everywhere across the country. The February 26, 2008, temporary blackout in Florida served as just another warning that the existing paradigm is failing. Although at the time of this writing, the exact cause of that blackout had not yet been identified, the incident serves as a reminder that the nationwide interconnected transmission and distribution grid is no longer stable. To wit: disturbances in Florida on that Tuesday were noted and measured as far away as New York.

A FAILING MODEL

The existing paradigm of nationwide grid interconnection brought about primarily by the deregulation movement of the late 1990s emphasizes that electricity be generated at large plants in various parts of the country and then distributed nationwide. There are two reasons this paradigm is failing. First, the transmission and distribution system wasn’t designed to serve as a nationwide grid; it is aged and only marginally stable. Second, political, regulatory and social forces are making the construction of large generating plants increasingly difficult, expensive and eventually unfeasible.

The previous historic paradigm made each utility primarily responsible for generation, transmission and distribution in its own service territory; this had the benefit of localizing disturbances and fragmenting responsibility and expense. With loose interconnections to other states and regions, a disturbance in one area or a lack of resources in a different one had considerably less effect on other parts of the country, or even other parts of service territories.

For better or worse, we now have a nationwide interconnected grid – albeit one that was neither designed for the purpose nor serves it adequately. Although the existing grid can be improved, the expense would be massive, and probably cost prohibitive. Knowledgeable industry insiders, in fact, calculate that it would cost more than the current market value of all U.S. utilities combined to modernize the nationwide grid and replace its large generating facilities over the next 30 years. Obviously, the paradigm is going to have to change.

While the need for dramatic change is clear, though, what’s less clear is the direction that change should take. And time is running short: North American Electric Reliability Corp. (NERC) projects serious shortages in the nation’s electric supply by 2016. Utilities recognize the need; they just aren’t sure which way to jump first.

With a number of tipping points already reached (and the changes they describe continuing to accelerate), it’s easy to envision the scenario that’s about to unfold. Consider the following:

  • The United States stands to face a serious supply/demand disconnect within 10 years. Unless something dramatic happens, there simply won’t be nearly enough electricity to go around. Already, some parts of the country are feeling the pinch. And regulatory and legislative uncertainty (especially around global warming and environmental issues) makes it difficult for utilities to know what to do. Building new generation of any type other than “green energy” is extremely difficult, and green energy – which currently meets less than 3 percent of U.S. supply needs – cannot close the growing gap between supply and demand being projected by NERC. Specifically, green energy will not be able to replace the 50 percent of U.S. electricity currently supplied by coal within that 10-year time frame.
  • Fuel prices continue to escalate, and the reliability of the fuel supply continues to decline. In addition, increasing restrictions are being placed on fuel selection, especially coal.
  • A generation of utility workers is nearing retirement, and finding adequate replacements among the younger generation is proving increasingly difficult.
  • It’s extremely difficult to site new transmission – needed to deal with supply-and-demand issues. Even new Federal Energy Regulatory Commission (FERC) authority to authorize corridors is being met with virulent opposition.

SMART GRID NO SILVER BULLET

Distributed generation – including many smaller supply sources to replace fewer large ones – and “smart grids” (designed to enhance delivery efficiency and effectiveness) have been posited as solutions. However, although such solutions offer potential, they’re far from being in place today. At best, smart grids and smarter consumers are only part of the answer. They will help reduce demand (though probably not enough to make up the generation shortfall), and they’re both still evolving as concepts. While most utility executives recognize the problems, they continue to be uncertain about the solutions and have a considerable distance to go before implementing any of them, according to recent Sierra Energy Group surveys.

According to these surveys, more than 90 percent of utility executives now feel that the intelligent utility enterprise and smart grid (IUE/SG) – that is, the distributed utility – represents an inevitable part of their future (Figure 1). This finding was true of all utility types supplying electricity.

Although utility executives understand the problem and the IUE/SG approach to solving part of it, they’re behind in planning on exactly how to implement the various pieces. That “planning lag” for the vision can be seen in Figure 2.

At least some fault for the planning lag can be attributed to forces outside the utilities. While politicians and regulators have been emphasizing conservation and demand response, they’ve failed to produce guidelines for how this will work. And although a number of states have established mandatory green power percentages, Congress failed to do the same in the federal energy legislation adopted in December 2007. While the EPACT of 2005 “urged” regulators to “urge” utilities to install smart meters, it didn’t make installation a requirement, and regulators in different parts of the country have acted on this urging at different speeds.

Although we’ve entered a new era, utilities remain burdened with the internal problems caused by the “silo mentality” left over from generations of tight regulatory control. Today, real-time data is often still jealously guarded in engineering and operations silos. However, a key component in the development of intelligent utilities will be pushing both real-time and back-office data onto dashboards so that executives can make real-time decisions.

Getting from where utilities were (and in many respects still are) in the last century to where they need to be by 2018 isn’t a problem that can be solved overnight. And, in fact, utilities have historically evolved slowly. Today’s executives know that technological evolution in the utility industry needs to accelerate rapidly, but they’re uncertain where to start. For example, should you install advanced metering infrastructure (AMI) as rapidly as possible? Do you emphasize automating the grid and adding artificial intelligence? Do you continue to build out mobile systems to push data (and more detailed, simpler instructions) to field crews who soon will be much younger and less experienced? Do you rush into home automation? Do you build windmills and solar farms? Utilities have neither the financial nor the human resources to do everything at once.

THE DEMAND FOR AMI

Its name implies that a smart grid will become increasingly self-operating and self-healing – and indeed much of the technology for this type of intelligent network grid has been developed. It has not, however, been widely deployed. Utilities, in fact, have been working on basic distribution automation (DA) – the capability to operate the grid remotely – for a number of years.

As mentioned earlier, most theorists – not to mention politicians and regulators – feel that utilities will have to enable AMI and demand response/home automation if they’re to encourage energy conservation in an impending era of short supplies. While automated meter reading (AMR) has been around for a long time, its penetration remains relatively small in the utilities industry – especially in the case of advanced AMI meters for enabling demand response: According to figures released by Sierra Energy Group and Newton-Evans Research Co., only 8 to 10 percent of this country’s utilities were using AMI meters by 2008.

That said, the push for AMI on the part of both EPACT 2005 and regulators is having an obvious effect. Numerous utilities (including companies like Entergy and Southern Co.) that previously refused to consider AMR now have AMI projects in progress. However, even though an anticipated building boom in AMI is finally underway, there’s still much to be done to enable the demand response that will be desperately needed by 2016.

THE AUTOMATED HOME

The final area we can expect the IUE/SG concept to envelop comes at the residential level. With residential home automation in place, utilities will be able to control usage directly – by adjusting thermostats or compressor cycling, or via other techniques. Again, the technology for this has existed for some time; however, there are very few installations nationwide. A number of experiments were conducted with home automation in the early to mid-1990s, with some subdivisions even being built under the mantra of “demand-side management.”

Demand response – the term currently in vogue with politicians – may be considered more politically correct, but the net result is the same. Home automation will enable regulators, through utilities, to ration usage. Although politicians avoid using the word rationing, if global warming concerns continue to seriously impact utilities’ ability to access adequate generation, rationing will be the result – making direct load control at the residential level one of the most problematic issues in the distributed utility paradigm of the future. Are large numbers of Americans going to acquiesce calmly to their electrical supply being rationed? No one knows, but there seem to be few options.

GREEN PRESSURE AND THE TIPPING POINT

While much legitimate scientific debate remains about whether global warming is real and, if so, whether it’s a naturally occurring or man-made phenomenon (arising primarily from carbon dioxide emissions), that debate is diminishing among politicians at every level. The majority of politicians, in fact, have bought into the notion that carbon emissions from many sources – primarily the generation of electricity by burning coal – are the culprit.

Thus, despite continued scientific debate, the political tipping point has been reached, and U.S. politicians are making moves to force this country’s utility industry to adapt to a situation that may or may not be real. Whether or not it makes logical or economic sense, utilities are under increasing pressure to adopt the Intelligent Utility/Smart Grid/Home Automation/Demand Response model – a model that includes many small generation points to make up for fewer large plants. This political tipping point is also shutting down more proposed generation projects each month, adding to the likely shortage. Since 2000, approximately 50 percent of all proposed new coal-fired generation plants have been canceled, according to energy-industry adviser Wood Mackenzie (Gas and Power Service Insight, February 2008).

In the distant future, as technology continues to advance, electric generation in the United States will likely include a mix of energy sources, many of them distributed and green. However, there’s no way that in the next 10 years – the window of greatest concern in the NERC projections on the generation and reliability side – green energy will be ready and available in sufficient quantities to forestall a significant electricity shortfall. Nuclear energy represents the only truly viable solution; however, ongoing opposition to this form of power generation makes it unlikely that sufficient nuclear energy will be available within this period. The already-lengthy licensing process (though streamlined somewhat of late by the Nuclear Regulatory Commission) is exacerbated by lawsuits and opposition every step of the way. In addition, most of the necessary engineering and manufacturing capability has been lost in the United States over the last 30 years – the time elapsed since the last U.S. nuclear plant was built – making it necessary to reacquire that knowledge from abroad.

The NERC Reliability Report of Oct. 15, 2007, points strongly toward a significant shortfall of electricity within approximately 10 years – a situation that could lead to rolling blackouts and brownouts in parts of the country that have never experienced them before. It could also lead to mandatory “demand response” – in other words, rationing – at the residential level. This situation, however, is not inevitable: technology exists to prevent it (including nuclear and cleaner coal now as well as a gradual development of solar, biomass, sequestration and so on over time, with wind for peaking). But thanks to concern over global warming and other issues raised by the environmental community, many politicians and regulators have become convinced otherwise. And thus, they won’t consider a different tack to solving the problem until there’s a public outcry – and that’s not likely to occur for another 10 years, at which point the national economy and utilities may already have suffered tremendous (possibly irreparable) harm.

WHAT CAN BE DONE?

The problem the utilities industry faces today is neither economic nor technological – it’s ideological. The global warming alarmists are shutting down coal before sufficient economically viable replacements (with the possible exception of nuclear) are in place. And the rest of the options are tied up in court. (For example, the United States would need some 45 liquefied natural gas plants if it were to convert to gas – a costly fuel with iffy reliability – yet only five have been built; the rest are tied up in court.) As long as it’s possible to tie up nuclear applications for five to 10 years and shut down “clean coal” plants through the political process, the U.S. utility industry is left with few options.

So what are utilities to do? They must get much smarter (IUE/SG), and they must prepare for rationing (AMI/demand response). As seen in SEG studies, utilities still have a ways to go in these areas, but at least this is a strategy that can (for the most part) be put in place within 10 to 15 years. The technology for IUE/SG already exists; it’s relatively inexpensive (compared with large-scale green energy development and nuclear plant construction); and utilities can employ it with relatively little regulatory oversight. In fact, regulators are actually encouraging it.

For these reasons, IUE/SG represents a major bridge to a more stable future. Even if today’s apocalyptic scenarios fail to develop – that is, global warming is debunked, or new generation sources develop much more rapidly than expected – intelligent utilities with smart grids will remain a good idea. The paradigm is shifting as we watch – but will that shift be completed in time to prevent major economic and social dislocation? Fasten your seatbelts: the next 10 to 15 years should be very interesting!