The Role of Telecommunications Providers in the Smart Grid

Utilities are facing a host of critical issues over the next 10 years. One of the major approaches to dealing with these challenges is for utilities to become much more "intelligent" through the development of Intelligent Utility Enterprises (IUE) and Smart Grids (SG). The IUE/SG will require ubiquitous communications systems throughout utility service territories, especially as advanced metering infrastructure (AMI) becomes a reality. Wireless systems, such as the widespread cellular networks that AT&T and other public carriers already operate, will play a major role in enabling these systems.

These communications must be two-way, all the way from the utility to individual homes. The Smart Grid will be a subset of the intelligent utility, enabling utility executives to make wise decisions to deal with the pending issues. Public carriers are currently positioned to support and provide a wide range of communications technologies and services, such as WiFi, satellite and cellular, which they are continuing to develop to meet current and future utility needs.

Supply and demand reaching critical concern

Utilities face some formidable mountains in the near future, and they must climb them in the crosshairs of regulatory, legislative and public scrutiny. Among these challenges is a looming shortage of electricity, which may become more critical as global warming concerns begin to compromise the ability to build large generating plants, especially those fueled by coal.

Utilities also have to contend with the growing political strength of an environmental movement that opposes most forms of generation other than those designated as "green energy." Thus, utilities face a political/legislative/regulatory perfect storm, on the one hand reducing their ability to generate electricity by conventional methods and, on the other, requiring levels of reliability they increasingly find impossible to meet.

The Intelligent Utility Enterprise and Smart Grid, with AMI as a subset of the Smart Grid, as potential, partial solutions

The primary solution proposed to date, which utilities can embrace on their own without waiting for regulatory/legislative/political clarity, is to use technology such as the IUE to become much more effective organizations and, through the SG, to substitute intelligence for manpower. The Smart Grid evolution also will enable the general public to take part in solving these problems through demand response. A subset of that evolution will be outage management to ensure that outages are anticipated and, except where required by supply shortages, resolved rapidly and effectively.

The IUE/SG, for the first time, will enable utility executives to see exactly what is happening on the grid in real time, so they can make the critical, day-to-day decisions in an environment of increasingly high prices and diminished supply for electricity.

Wireless To Play A Major Role In Required Ubiquitous Communications

Automating the self-operating, self-healing grid – artificial intelligence

The IUE/SG obviously will require enterprise-wide digital communications to enable the rapid transfer of data between one system and another, all the way from smart meters and other in-home gateways to the boardrooms where critical decisions will be made. Utilities already have embraced service-oriented architecture (SOA) as a means of linking everything together. SOA-enabled systems are easily linked over IP, which can operate over the traditional wire and fiber optic communications systems many utilities have in place, as well as over existing cellular wireless systems. Wireless communications are becoming more helpful in linking disparate systems from the home, through the distribution systems, to substations, control rooms and beyond to the enterprise. The ubiquitous utility communications of the future will integrate a wide range of systems, some of them owned by the utilities and others leased and contracted from various carriers.

The Smart Grid is a subset of the entire utility enterprise and is linked to the boardroom by various increasingly intelligent systems throughout.

Utility leadership will need vital information about the operation of the grid all the way into the home, where distributed generation, net billing, demand response, and reduction of voltage or current will take place. This communications network must operate in real time and must provide information to all of what traditionally were called "back office" systems, but which now must be capable of collating information never before received or considered.

The distribution grid itself will have to become much more automated, self-healing, and self-operating through artificial intelligence. Traditional SCADA (supervisory control and data acquisition) will have to become more capable, and the data it collects will have to be pushed further up into the utility enterprise and to departments that have not previously dealt with real-time data.

The communications infrastructure

In the past, utilities typically owned much of their communications systems. Most of these systems are aged, and converting them to modern digital systems is difficult and expensive.

Utilities are likely to embrace a wide range of new and existing communications technologies as they grapple with their supply/demand disconnect problem. One of these is IP/MPLS (Internet Protocol/Multi-Protocol Label Switching), which already is proven in utility communications networks as well as in other industries that require mission-critical communications. MPLS is used to make communications more reliable and to provide the prioritization that ensures the required latency for specific traffic.
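
To make the prioritization concrete, here is a minimal sketch, assuming a Linux-style socket API, of how an application might mark latency-critical traffic with a DSCP value that an IP/MPLS edge router can classify on. The address and payload are hypothetical; only the DSCP convention (Expedited Forwarding = 46) comes from common practice.

```python
import socket

# DSCP 46 (Expedited Forwarding) is conventionally used for latency-critical
# traffic such as teleprotection or SCADA. DSCP occupies the upper six bits
# of the IP TOS byte, hence the shift left by two.
DSCP_EF = 46

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)

# An MPLS edge router would classify on this marking and map the traffic to
# a high-priority label-switched path. Address and payload are hypothetical.
sock.sendto(b"breaker-status:OPEN", ("192.0.2.10", 20000))
```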

One of the advantages offered by public carriers is that their networks have almost ubiquitous coverage of utility service territories, as well as built-in switching capabilities. They also have been built to communications standards that, while still evolving, help ensure important levels of security and interoperability.

"Cellular network providers are investing billions of dollars in their networks," points out Henry L. Jones II, chief technology officer at SmartSynch, an AMI vendor and author of the article entitled "Want six billion dollars to invest in your AMI network?"

"AT&T alone will be spending 16-17 billion dollars in 2009," Jones notes. "Those investments are spent efficiently in a highly competitive environment to deliver high-speed connectivity anywhere that people live and work. Of course, the primary intent of these funds is to support mobile users with web browsing and e-mail. Communicating with meters is a much simpler proposition, and one can rely on these consumer applications to provide real-world evidence that scalability to system-wide AMI will not be a problem."

Utilities deal in privileged communications with their customers, and their systems are vulnerable to terrorism. As a result, Congress, through the Federal Energy Regulatory Commission (FERC), designated NERC as the agency responsible for ensuring security of all utility facilities, including communications.

As an example of meeting security needs at a major utility, AT&T is providing redundant communications systems over a wireless WAN for a utility’s 950 substations, according to Andrew Hebert, AT&T Area Vice President, Industry Solutions Mobility Practice. This enables the utility to meet critical infrastructure protection standards and "harden" its SCADA and distribution automation systems by providing redundant communications pathways.

SCADA communication, distribution automation, and even devices providing artificial intelligence reporting are possible with today’s modern communications systems. Latency is important in terms of automatic fault reporting and switching. The communications network must provide the delivery-time performance to support substation automation as identified in IEEE 1646. Some wireless systems now offer latencies in the 125ms range. Some of the newer systems are designed for no more than 50ms latency.
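
To illustrate checking traffic against such a budget, the sketch below times a UDP probe to a hypothetical substation gateway and flags round trips above 50 ms. Real delivery-time verification per IEEE 1646 is considerably more involved; this only conveys the idea.

```python
import socket
import time

GATEWAY = ("192.0.2.20", 20001)   # hypothetical substation gateway
BUDGET_MS = 50                    # latency target cited in the text

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(1.0)

start = time.perf_counter()
sock.sendto(b"echo", GATEWAY)     # assumes the gateway echoes probes back
try:
    sock.recvfrom(1024)
    rtt_ms = (time.perf_counter() - start) * 1000
    print(f"round trip {rtt_ms:.1f} ms -> {'OK' if rtt_ms <= BUDGET_MS else 'BREACH'}")
except socket.timeout:
    print("no reply within 1 s; treat as an availability event, not just latency")
```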

As AMI becomes more widespread, millions of in-home and in-business devices will have to be controlled and managed from the utility side. Meter readings must be collected and routed to meter data management systems. While it is possible to feed all this data directly to some central location, it is likely that this data avalanche will be routed through substations for aggregation, handling and transfer to corporate WANs. As the number of meter points grows, and as the number of readings taken per hour and the number of in-home control signals increase, bandwidth and latency factors will have to be considered carefully.
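
The substation-aggregation idea can be sketched in a few lines. All names and the batch size are hypothetical; the point is simply that buffering readings at the substation trades many small head-end transfers for fewer, larger WAN transfers.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Reading:
    meter_id: str
    interval_end: str        # e.g., "2009-06-01T14:00:00"
    kwh: float

def send_to_wan(batch: List[Reading]) -> None:
    # Placeholder for the uplink to the corporate WAN / MDMS.
    print(f"forwarding {len(batch)} readings to the meter data management system")

class SubstationConcentrator:
    """Hypothetical aggregation point: buffers readings from local meters and
    forwards them upstream in batches, smoothing bandwidth demand."""
    def __init__(self, batch_size: int = 1000):
        self.batch_size = batch_size
        self.buffer: List[Reading] = []

    def ingest(self, reading: Reading) -> None:
        self.buffer.append(reading)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self) -> None:
        batch, self.buffer = self.buffer, []
        send_to_wan(batch)

sub = SubstationConcentrator(batch_size=2)
sub.ingest(Reading("MTR-001", "2009-06-01T14:00:00", 1.25))
sub.ingest(Reading("MTR-002", "2009-06-01T14:00:00", 0.98))   # triggers a flush
```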

Public cellular carriers already have interoperability (e.g., you can call someone on a cell phone even though they use a different carrier), and it is likely that there will be more standardization of communications systems going forward. A paradigm shift toward national and international communications interoperability already has occurred – for example, with the global GSM standard on which the AT&T network is based. A similar shift in the communications systems utilities use is necessary and likely to come about in the next few years. It no longer is practical for utilities to cobble together communications with varying standards for different portions of their service territory or different functional purposes.

Managing Communications Change

Change is being forced upon the utilities industry. Business drivers range from stakeholder pressure for greater efficiency to the changing technologies involved in operational energy networks. New technologies such as intelligent networks or smart grids, distribution automation or smart metering are being considered.

The communications network is becoming the key enabler for the evolution of reliable energy supply. However, few utilities today have a communications network that is robust enough to handle and support the exacting demands that energy delivery is now making.

It is this process of change – including the renewal of the communications network – that is vital for each utility’s future. But for the utility, this is a technological step change requiring different strategies and designs. It also requires new skills, all of which must be acquired in timescales that do not sit comfortably with traditional technology strategies.

The problems facing today’s utility include understanding the new technologies and assessing their capabilities and applications. In addition, the utility has to develop an appropriate strategy to migrate legacy technologies and integrate them with the new infrastructure in a seamless, efficient, safe and reliable manner.

This paper highlights the benefits utilities can realize by adopting a new approach to their customers’ needs and engaging a network partner that will take responsibility for the network upgrade, its renewal and evolution, and the service transition.

The Move to Smart Grids

The intent of smart grids is to provide better efficiency in the production, transport and delivery of energy. This is realized in two ways:

  • Better real-time control: ability to remotely monitor and measure energy flows more closely, and then manage those flows and the assets carrying them in real time.
  • Better predictive management: ability to monitor the condition of the different elements of the network, predict failure and direct maintenance. The focus is on being proactive to real needs prior to a potential incident, rather than being reactive to incidents, or performing maintenance on a repetitive basis whether it is needed or not.

These mechanisms imply more measurement points, remote monitoring and management capabilities than exist today. And this requires a greater reliance on reliable, robust, highly available communications than has ever been the case before.

The communications network must continue to support operational services independently of external events, such as power outages or public service provider failure, yet be economical and simple to maintain. Unfortunately, the majority of today’s utility communications implementations fall far short of these stringent requirements.

Changing Environment

The design template for the majority of today’s energy infrastructure was developed in the 1950s and 1960s – and the same is true of the associated communications networks.

Typically, these communications networks have evolved into a series of overlays, often of different technology types and generations (see Figure 1). For example, protection tends to use its own dedicated network. The physical realization varies widely, from tones over copper via dedicated time division multiplexing (TDM) connections to dedicated fiber connections. These generally use a mix of privately owned and leased services.

Supervisory control and data acquisition (SCADA) systems generally still use modem technology at speeds between 300 baud and 9.6 kbaud. Again, the infrastructure is often copper or TDM running as one of many separate overlay networks.

Lastly, operational voice services (as opposed to business voice services) are frequently analog on yet another separate network.

Historically, there were good operational reasons for these overlays. But changes in device technology (for example, the evolution toward e-SCADA based on IP protocols), as well as decreasing support by communications equipment vendors for legacy communications technologies, mean that the strategy for these networks has to be reassessed. In addition, the increasing demand for further operational applications (for example, condition monitoring or CCTV, both to support substation automation) requires a more up-to-date networking approach.

Tomorrow’s Network

With the exception of protection services, communications between network devices and the network control centers are evolving toward IP-based networks (see Figure 2). The benefits of this simplified infrastructure are significant and can be measured in terms of asset utilization, reduced capital and operational costs, ease of operation, and the flexibility to adapt to new applications. Consequently, utilities will find themselves forced to seriously consider the shift to a modern, homogeneous communications infrastructure to support their critical operational services.

Organizing For Change

As noted above, there are many cogent reasons to transform utility communications to a modern, robust communications infrastructure in support of operational safety, reliability and efficiency. However, some significant considerations should be addressed to achieve this transformation:

Network Strategy. It is almost inevitable that a new infrastructure will cross traditional operational and departmental boundaries within the utility. Each operational department will have its own priorities and requirements for such a network, and traditionally, each wants some, or total, control. However, to achieve real benefits, a greater degree of centralized strategy and management is required.

Architecture and Design. The new network will require careful engineering to ensure that it meets the performance-critical requirements of energy operations. It must maintain or enhance the safety and reliability of the energy network, as well as support the traffic requirements of other departments.

Planning, Execution and Migration. Planning and implementation of the core infrastructure is just the start of the process. Each service requires its own migration plan and has its own migration priorities. Each element requires specialist technical knowledge and, preferably, practical field experience.

Operation. Gone are the days when a communications failure was rectified by sending an engineer into the field to find the fault and fix it. Maintaining network availability and robustness calls for sound operational processes and excellent diagnostics before any engineer or technician hits the road. The same level of robust centralized management tools and processes that supports the energy networks has to be put in place to support the communications network – no matter what technologies are used in the field.

Support. Although these technologies are well understood by the telecommunications industry, they are likely to be new to the energy utilities industry. This means that a solid support organization familiar with these technologies must be established. The evolution process requires an intense level of up-front skills and resources. Often these are not readily available in-house – certainly not in the volume required to make any network renewal or transformation effective. Building up this skill and resource base by recruitment will not necessarily yield staff aware of the peculiarities of the energy utilities market. As a result, there will be a significant time lag from concept to execution, and considerable risk for the utility as it ventures alone into unknown territory.

Keys To Successful Engagement

Engaging a services partner does not mean ceding control through a rigid contract. Rather, it means crafting a flexible relationship that takes into consideration three factors: What is the desired outcome of the activity? What is the best balance of scope between partner assistance and in-house performance to achieve that outcome? How do you retain the flexibility to accommodate change while retaining control?

Desired outcome is probably the most critical element and must be well understood at the outset. For one utility, the desired outcome may be to rapidly enable the upgrade of the complete energy infrastructure without having to incur the upfront investment in a mass recruitment of the required new communications skills.

For other utilities, the desired outcome may be different. But if the outcomes include elements of time pressure, new skills and resources, and/or network transformation, then engaging a services partner should be seriously considered as one of the strategic options.

Second, not all activities have to be in scope. The objective of the exercise might be to supplement existing in-house capabilities with external expertise. Or, it might be to launch the activity while building up appropriate in-house resources in a measured fashion through the Build-Operate-Transfer (BOT) approach.

In looking for a suitable partner, the utility seeks to leverage not only the partner’s existing skills, but also its experience and lessons learned performing the same services for other utilities. Having a few bruises is not a bad thing – this means that the partner understands what is at stake and the range of potential pitfalls it may encounter.

Lastly, retaining flexibility and control is a function of the contract between the two parties which should be addressed in their earliest discussions. The idea is to put in place the necessary management framework and a robust change control mechanism based on a discussion between equals from both organizations. The utility will then find that it not only retains full control of the project without having to take day-to-day responsibility for its management, but also that it can respond to change drivers from a variety of sources – such as technology advances, business drivers, regulators and stakeholders.

Realizing the Benefits

Outsourcing or partnering the communications transformation will yield benefits, both tangible and intangible. It must be remembered that there is no standard “one-size-fits-all” outsourcing product. Thus, the benefits accrued will depend on the details of the engagement.

There are distinct tangible benefits that can be realized, including:

Skills and Resources. A unique benefit of outsourcing is that it eliminates the need to recruit skills not available internally. These are provided by the partner on an as-needed basis. The additional advantage for the utility is that it does not have to bear the fixed costs once they are no longer required.

Offset Risks. Because the partner is responsible for delivery, the utility is able to mitigate risk. For example, traditionally vendors are not motivated to do anything other than deliver boxes on time. But with a well-structured partnership, there is an incentive to ensure that the strategy and design are optimized to economically deliver the required services and ease of operation. Through an appropriate regime of business-related key performance indicators (KPIs), there is a strong financial incentive for the partner to operate and upgrade the network to maintain peak performance – something that does not exist when an in-house organization is used.

Economies of Scale. Outsourcing can bring economies of scale resulting from synergies with other parts of the partner’s business, such as contracts and internal projects.

There also are many other benefits associated with outsourcing that are not as immediately obvious and commercially quantifiable as those listed above, but can be equally valuable.

Some of these less tangible benefits include:

Fresh Point of View. Within most companies, employees often have a vested interest in maintaining the status quo. But a managed services organization has a vested interest in delivering the best possible service to the customer – a paradigm shift in attitude that enables dramatic improvements in performance and creativity.

Drive to Achieve Optimum Efficiency. Executives, freed from the day-to-day business of running the network, can focus on their core activities, concentrating on service excellence rather than complex technology decisions. To quote one customer, “From my perspective, a large amount of my time that might have in the past been dedicated to networking issues is now focused on more strategic initiatives concerned with running my business more effectively.”

Processes and Technologies Optimization. Optimizing processes and technologies to improve contract performance is part of the managed services package and can yield substantial savings.

Synergies with Existing Activities Create Economies of Scale. A utility and a managed services vendor have considerable overlap in the functions performed within their communications engineering, operations and maintenance activities. For example, a multi-skilled field force can install and maintain communications equipment belonging to a variety of customers. This not only provides cost savings from synergies with the equivalent customer activity, but also an improved fault response due to the higher density of deployed staff.

Access to Global Best Practices. An outsourcing contract relieves a utility of the time-consuming and difficult responsibility of keeping up to speed with the latest thinking and developments in technology. Alcatel-Lucent, for example, invests around 14 percent of its annual revenue into research and development; its customers don’t have to.

What Can Be Outsourced?

There is no one outsourcing solution that fits all utilities. The final scope of any project will be entirely dependent on a utility’s specific vision and current circumstances.

The following list briefly describes some of the functions and activities that are good possibilities for outsourcing:

Communications Strategy Consulting. Before making technology choices, the energy utility needs to define the operational strategy of the communications network. Too often communications is viewed as “plug and play,” which is hardly ever the case. A well-thought-out communications strategy will deliver this kind of seamless operation. But without that initial strategy, the utility risks repeating past mistakes and acquiring an ad-hoc network that will rapidly become a legacy infrastructure, which will, in turn, need replacing.

Design. Outsourcing allows utilities to evolve their communications infrastructure without upfront investment in incremental resources and skills. The utility can delegate responsibility for defining network architecture and the associated network support systems. It may elect to leave all technological decisions to the vendor and merely review progress and outcomes. Or, it may retain responsibility for technology strategy and turn to the managed services vendor to turn the strategy into architecture and manage the subsequent design and project activities.

Build. Detailed planning of the network, the rollout project and the delivery of turnkey implementations all fall within the scope of the outsourcing process.

Operate, Administer and Maintain. Includes network operations and field and support services:

  • Network Operations. A vendor such as Alcatel-Lucent has the necessary experience in operating Network Operations Centers (NOCs), both on a BOT and ongoing basis. This includes handling all associated tasks such as performance and fault monitoring, and services management.
  • Network and Customer Field Services. Today, few energy utilities consider field maintenance and provisioning activities to be a strategic part of their business, and they recognize that these are prime candidates for outsourcing. Activities that can be outsourced include corrective and preventive maintenance, network and service provisioning, and spare parts management, return and repair – in other words, all the daily, time-consuming, but vitally important elements for running a reliable network.
  • Network Support Services. Behind the first-line activities of the NOC is a set of engineering support functions that assist with more complex faults – functions that cannot be automated and that tend to duplicate those of the vendor. The integration and sharing of these functions enabled by outsourcing can significantly improve the utility’s efficiency.

Conclusion

Outsourcing can deliver significant benefits to a utility, both in its ability to invest in and improve its operations and in the associated costs. However, each utility has its own unique circumstances, specific immediate needs, and vision of where it is going. Therefore, each technical and operational solution is different.

Alcatel-Lucent Your Smart Grid Partner

Alcatel-Lucent offers comprehensive capabilities that combine Utility industry-specific knowledge and experience with carrier-grade communications technology and expertise. Our IP/MPLS transformation capabilities and Utility market-specific knowledge are the foundation of turnkey solutions designed to enable Smart Grid and Smart Metering initiatives. In addition, Alcatel-Lucent has specifically developed Smart Grid and Smart Metering applications and solutions that:

  • Improve the availability, reliability and resiliency of critical voice and data communications even during outages
  • Enable optimal use of network and grid devices by setting priorities for communications traffic according to business requirements
  • Meet NERC CIP compliance and cybersecurity requirements
  • Improve the physical security and access control mechanism for substations, generation facilities and other critical sites
  • Offer a flexible and scalable network to grow with the demands and bandwidth requirements of new network service applications
  • Provide secure web access for customers to view account, electricity usage and billing information
  • Improve customer service and experience by integrating billing and account information with IP-based, multi-channel client service platforms
  • Reduce carbon emissions and increase efficiency by lowering communications infrastructure power consumption by as much as 58 percent

Working with Alcatel-Lucent enables Energy and Utility companies to realize the increased reliability and greater efficiency of next-generation communications technology, providing a platform for, and minimizing the risks associated with, moving to Smart Grid solutions. And Alcatel-Lucent helps Energy and Utility companies achieve compliance with regulatory requirements and reductions in operational expenses while maintaining the security, integrity and high availability of their power infrastructure and services. We build Smart Networks to support the Smart Grid.

American Recovery and Reinvestment Act of 2009 Support from Alcatel-Lucent

The American Recovery and Reinvestment Act (ARRA) of 2009 was adopted by Congress in February 2009 and allocates $4.5 billion to the Department of Energy (DoE) for Smart Grid deployment initiatives. As a result of the ARRA, the DoE has established a process for awarding the $4.5 billion via investment grants for Smart Grid research, development and deployment projects. Alcatel-Lucent is uniquely qualified to help utilities take advantage of the ARRA Smart Grid funding. In addition to world-class technology and Smart Grid and Smart Metering solutions, Alcatel-Lucent offers turnkey assistance in the preparation of grant applications, and subsequent follow-up and advocacy with federal agencies. Partnership with Alcatel-Lucent on ARRA includes:

  • Design, implementation and support for a Smart Grid network
  • Identification of all standardized and unique elements of each grant program
  • Preparation and compilation of all required grant application components, such as project narratives, budget formation, market surveys, mapping, and all other documentation required for completion
  • Advocacy at federal, state, and local government levels to firmly establish the value proposition of a proposal and advance it through the entire process to ensure the maximum opportunity for success

Alcatel-Lucent is a Recognized Leader in the Energy and Utilities Market

Alcatel-Lucent is an active and involved leader in the Energy and Utility market, with active membership and leadership roles in key Utility industry associations, including the Utility Telecom Council (UTC), the American Public Power Association (APPA), and the GridWise Alliance. GridWise is an association of Utilities, industry research organizations (such as EPRI and Pacific Northwest National Laboratory), and Utility vendors, working in cooperation with the DOE to promote Smart Grid policy, regulatory issues, and technologies (see www.gridwise.org for more info). Alcatel-Lucent is also represented on the Board of Directors for UTC’s Smart Network Council, which was established in 2008 to promote and develop Smart Grid policies, guidelines, and recommended technologies and strategies for Smart Grid solution implementation.

Alcatel-Lucent IP MPLS Solution for the Next Generation Utility Network

Utility companies are experienced at building and operating reliable and effective networks to ensure the delivery of essential information and maintain flawless service delivery. The Alcatel-Lucent IP/MPLS solution can enable the utility operator to extend and enhance its network with new technologies like IP, Ethernet and MPLS. These new technologies will enable the utility to optimize its network to reduce both CAPEX and OPEX without jeopardizing reliability. Advanced technologies also allow the introduction of new Smart Grid applications that can improve operational and workflow efficiency within the utility. Alcatel-Lucent leverages cutting edge technologies along with the company’s broad and deep experience in the utility industry to help utility operators build better, next-generation networks with IP/MPLS.

Alcatel-Lucent has years of experience in the development of IP, MPLS and Ethernet technologies. The Alcatel-Lucent IP/MPLS solution offers utility operators the flexibility, scale and feature sets required for mission-critical operation. With the broadest portfolio of products and services in the telecommunications industry, Alcatel-Lucent has the unparalleled ability to design and deliver end-to-end solutions that drive next-generation utility networks.

About Alcatel-Lucent

Alcatel-Lucent’s vision is to enrich people’s lives by transforming the way the world communicates. As a leader in utility, enterprise and carrier IP technologies, fixed, mobile and converged broadband access, applications, and services, Alcatel-Lucent offers the end-to-end solutions that enable compelling communications services for people at work, at home and on the move.

With 77,000 employees and operations in more than 130 countries, Alcatel-Lucent is a local partner with global reach. The company has the most experienced global services team in the industry, and Bell Labs, one of the largest research, technology and innovation organizations focused on communications. Alcatel-Lucent achieved adjusted revenues of €17.8 billion in 2007, and is incorporated in France, with executive offices located in Paris.

Successful Smart Grid Architecture

The smart grid is progressing well on several fronts. Groups such as the GridWise Alliance, events such as GridWeek, and national policy support such as the American Recovery and Reinvestment Act in the U.S. have all brought more positive attention to this opportunity. The boom in distributed renewable energy and its demands for a bidirectional grid are driving the need forward, as are sentiments for improving consumer control and awareness, giving customers the ability to engage in real-time energy conservation.

On the technology front, advances in wireless and other data communications make wide-area sensor networks more feasible. Distributed computation is certainly more powerful – just consider your iPod! Even architectural issues such as interoperability are now being addressed in their own forums, such as Grid-Interop. It seems that the recipe for a smart grid is coming together in a way that would make many who envisioned it proud. But to avoid making a gooey mess in the oven, an overall architecture that carefully considers seven key ingredients for success must first exist.

Sources of Data

Utilities have eons of operational data: both real time and archival, both static (such as nodal diagrams within distribution management systems) and dynamic (such as switching orders). There is a wealth of information generated by field crews and from root-cause analyses of past system failures. Advanced metering infrastructure (AMI) implementations become a fine-grained distribution sensor network feeding communication aggregation systems such as Silver Spring Networks’ UtilityIQ or Trilliant’s SecureMesh network.

These data sources need to be architected to be available to enhance, support and provide context for real-time data coming in from new intelligent electronic devices (IEDs) and other smart grid devices. In an era of renewable energy sources, grid connection controllers become yet another data source. With renewables, micro-scale weather forecasting such as IBM Research’s Deep Thunder can provide valuable context for grid operation.

Data Models

Once data is obtained, in order to preserve its value in a standard format, one can think in terms of an extensible markup language (XML)-oriented database. Modern implementations of these databases have improved performance characteristics, and the International Electrotechnical Commission (IEC) common information model/generic interface definition (CIM/GID), though oriented more to assets than operations, is a front-running candidate for consideration.

Newer entries, such as the device language message specification/companion specification for energy metering (DLMS/COSEM) for AMI, are also coming into practice. Sometimes more important than the technical implementation of the data, however, is the model that is employed. A well-designed data model not only makes exchange of data and legacy program adjustments easier, but it can also help with the applicability of security and performance requirements. The existence of data models is often a good indicator of an intact governance process, for it facilitates use of the data by multiple applications.
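
As a toy illustration of why a shared model helps, the sketch below parses a CIM-flavored interval-reading document. The element and attribute names are invented for the example; they are not the normative CIM/GID or DLMS/COSEM schema.

```python
import xml.etree.ElementTree as ET

# Illustrative document only; the real standards are far richer. The point
# is that one agreed model lets several applications consume the same
# payload without bespoke adapters.
doc = """
<MeterReading meterId="MTR-001">
  <IntervalBlock>
    <IntervalReading end="2009-06-01T14:00:00" value="1.25" unit="kWh"/>
    <IntervalReading end="2009-06-01T14:15:00" value="1.31" unit="kWh"/>
  </IntervalBlock>
</MeterReading>
"""

root = ET.fromstring(doc)
meter = root.get("meterId")
for r in root.iter("IntervalReading"):
    print(meter, r.get("end"), r.get("value"), r.get("unit"))
```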

Communications

Customer workshops and blueprinting sessions have shown that one of the most common issues needing to be addressed is the design of the wide-area communication system. Data communications architecture affects data rate performance, the cost of distributed intelligence and the identification of security susceptibilities.

There is no single communications technology that is suitable for all utilities, or even for all operational areas across any individual utility. Rural areas may be served by broadband over powerline (BPL), while urban areas benefit from multi-protocol label switching (MPLS) and purpose-designed mesh networks, enhanced by their proximity to fiber.

In the future, there could be entirely new choices in communications. So, the smart grid architect needs to focus on security, standardized interfaces to accept new technology, enablement of remote configuration of devices to minimize any touching of smart grid devices once installed, and future-proofing the protocols.

The architecture should also be traceable to the business case. This needs to include probable use cases that may not be in the PUC filing, such as AMI now, but smart grid later. Few utilities will be pleased with the idea of a communication network rebuild within five years of deploying an AMI-only network.

Communications architecture must also consider power outages, so battery backup, solar recharging, or other equipment may be required. Even arcane details such as “Will the antenna on a wireless device be the first thing to blow off in a hurricane?” need to be considered.

Security

Certainly, the smart grid’s purpose is to enhance network reliability, not lower its security. But with the advent of the North American Electric Reliability Corp. Critical Infrastructure Protection (NERC-CIP) requirements, security has risen to become a prime consideration, usually addressed in phase one of the smart grid architecture.

Unlike the data center, field-deployed security has many new situations and challenges. There is security at the substation – for example, who can access what networks, and when, within the control center. At the other end, security of the meter data in a proprietary AMI system needs to be addressed so that only authorized applications and personnel can access the data.

Service-oriented architecture (SOA) appliances are network devices that enable integration and help provide security at the Web-services message level. These typically include an integration device, which streamlines SOA infrastructures; an XML accelerator, which offloads XML processing; and an XML security gateway, which helps provide message-level Web-services security. A security gateway helps to ensure that only authorized applications are allowed to access the data, whether from an IP meter or an IED. SOA appliance security features complement the SOA security management capabilities of software.
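
A minimal sketch of the gateway idea follows, assuming a shared-key HMAC for authorization. Production gateways rely on standards such as WS-Security and certificate-based identity rather than this toy scheme; the key and message are placeholders.

```python
import hashlib
import hmac
import xml.etree.ElementTree as ET

SHARED_KEY = b"demo-key-not-for-production"   # assumption for the sketch

def gateway_accept(raw_xml: bytes, signature_hex: str) -> bool:
    """Two checks a message-level gateway performs before anything reaches
    the meter-data service: the payload must be well-formed XML, and its
    HMAC must prove it came from an authorized application."""
    try:
        ET.fromstring(raw_xml)               # reject malformed XML early
    except ET.ParseError:
        return False
    expected = hmac.new(SHARED_KEY, raw_xml, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

msg = b"<ReadRequest meterId='MTR-001'/>"
sig = hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()
print(gateway_accept(msg, sig))              # True: well-formed and signed
print(gateway_accept(b"<broken", sig))       # False: malformed payload
```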

Proper architectures could address dynamic, trusted virtual security domains, and be combined not only with intrusion prevention systems but also with anomaly detection systems. If hackers can introduce viruses in data (such as malformed video images that leverage faults in media players), then similar concerns should be under discussion with smart grid data. Is messing with 300 megawatts (MW) of demand response much different than cyber attacking a 300 MW generator?

Analytics

A smart grid cynic might say, “Who is going to look at all of this new data?” That is where analytics supports the processing, interpretation and correlation of the flood of new grid observations. One part of the analytics would be performed by existing applications; this is where data models and integration play a key role. Another part of the analytics dimension involves new applications and the ability of engineers to use a workbench to create their own customized analytics dashboards in a self-service model.

Many utilities have power system engineers in a back office using spreadsheets; part of the smart grid concept is that all data is available to the community to use modern tools to analyze and predict grid operation. Analytics may need a dedicated data bus, separate from an enterprise service bus (ESB) or enterprise SOA bus, to meet the timeliness and quality of service to support operational analytics.

A two-tier or three-tier (if one considers the substations) bus is an architectural approach to segregate data by speed while still maintaining the interconnections that support a holistic view of the operation. Connections to standard industry tools such as ABB’s NEPLAN® or Siemens Power Technologies International PSS®E, or general tools such as MATLAB, should be considered at design time, rather than as an additional expense commitment after smart grid commissioning.
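
The bus segregation might be sketched as a simple topic router; the topic names are invented, and a real deployment would use a message broker with delivery guarantees rather than in-process queues.

```python
import queue

# Two buses: a fast operational bus for grid telemetry, and the enterprise
# bus for billing and reporting traffic. Topic names are illustrative.
operational_bus = queue.Queue()
enterprise_bus = queue.Queue()

OPERATIONAL_TOPICS = {"scada.fault", "feeder.load", "substation.alarm"}

def publish(topic: str, payload: dict) -> None:
    bus = operational_bus if topic in OPERATIONAL_TOPICS else enterprise_bus
    bus.put((topic, payload))

publish("substation.alarm", {"id": "Walden", "severity": "high"})
publish("billing.cycle", {"account": "A-42"})
print(operational_bus.qsize(), enterprise_bus.qsize())   # 1 1
```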

Integration

Once data is sensed, securely communicated, modeled and analyzed, the results need to be applied for business optimization. This means new smart grid data gets integrated with existing applications, and metadata locked in legacy systems is made available to provide meaningful context.

This is typically accomplished by enabling systems as services per the classic SOA model. However, issues of common data formats, data integrity and name services must be considered. Data integrity includes verification and cross-correlation of information for validity, and designation of authoritative sources and specific personnel who own the data.

A name service addresses the common issue of an asset – whether transformer or truck – having multiple names in multiple systems. An example might be a substation that has a location name, such as Walden; a geographic information system (GIS) identifier such as latitude and longitude; a map name such as nearest cross streets; a capital asset number in the financial system; a logical name in the distribution system topology; an abbreviated logical name to fit in the distribution management system graphical user interface (DMS GUI); and an IP address for the main network router in the substation.

Different applications may know new data by association with one of those names, and that name may need translation to be used in a query with another application. While rewriting the applications to a common model may seem appealing, it may very well send a CIO into shock. While the smart grid should help propagate intelligence throughout the utility, this doesn’t necessarily mean to replace everything, but it should “information-enable” everything.
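
A name service can be sketched as a registry of aliases keyed by asset. The Walden values below are illustrative, not real data.

```python
# One physical asset, many system-specific names (the Walden example from
# the text). A name service keeps the aliases in one registry so a query
# from one application can be translated for another.
ASSET_ALIASES = {
    "substation:walden": {
        "location": "Walden",
        "gis": (41.53, -72.65),        # illustrative coordinates
        "finance": "CAP-004217",
        "dms_gui": "WLDN",
        "router_ip": "10.12.34.1",
    }
}

def translate(asset_key: str, source: str, target: str):
    aliases = ASSET_ALIASES[asset_key]
    print(f"{source} name {aliases[source]!r} -> {target} name {aliases[target]!r}")
    return aliases[target]

translate("substation:walden", "finance", "router_ip")
```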

Interoperability is essential at both a service level and at the application level. Some vendors focus more at the service, but consider, for example, making a cell phone call from the U.S. to France – your voice data may well be code division multiple access (CDMA) in the U.S., travel by microwave and fiber along its path, and emerge in France in a global system for mobile (GSM) environment, yet your speech, the “application level data,” is retained transparently (though technology does not yet address accents!).

Hardware

Computerized solutions are not about software alone. For instance, AMI storage consolidation addresses the concern that the volume of data coming into the utility will be increasing exponentially. As more meter data can be read in an on-demand fashion, data analytics will be employed to properly understand it all, requiring a sound hardware architecture to manage, back up and feed the data into the analytics engines. In particular, storage is needed in the head-end systems and the meter data management systems (MDMS).

Head-end systems pull data from the meters to provide management functionality while the MDMS collects data from head-end systems and validates it. Then the data can be used by billing and other business applications. Data in both the head-end systems and the master copy of the MDMS is replicated into multiple copies for full back up and disaster recovery. For MDMS, the master database that stores all the aggregated data is replicated for other business applications, such as customer portal or data analytics, so that the master copy of the data is not tampered with.
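
The validation step can be illustrated with a toy rule of the kind an MDMS applies before releasing data to billing. The threshold is an assumption for the sketch, not a standard value.

```python
def validate_reading(kwh: float, prior_kwh: float, max_jump: float = 50.0):
    """Toy MDMS validation: readings must be non-negative and must not jump
    implausibly between intervals (the bound here is hypothetical). Failed
    readings would be flagged for estimation rather than billed as-is."""
    if kwh < 0:
        return False, "negative consumption"
    if abs(kwh - prior_kwh) > max_jump:
        return False, "implausible interval jump; flag for estimation"
    return True, "ok"

print(validate_reading(1.4, 1.3))    # (True, 'ok')
print(validate_reading(90.0, 1.3))   # flagged for estimation
```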

Since the smart grid essentially performs in real time, and the electricity business is non-stop, one must think of hardware and software solutions as needing to be fail-safe, with automated redundancy. The AMI data especially needs to be reliable. The key factors then become: operating system stability; hardware memory access speed and range; server and power supply reliability; file system redundancy, such as a journaled file system (JFS); and techniques such as FlashCopy to provide a point-in-time copy of a logical drive.

FlashCopy can be useful in speeding up database hot backups and restores. VolumeCopy can extend the replication functionality by providing the ability to copy the contents of one volume to another. Enhanced remote mirroring (Global Mirror, Global Copy and Metro Mirror) can provide the ability to mirror data from one storage system to another over extended distances.

Conclusion

Those are seven key ingredients for designing or evaluating a recipe for success with regard to implementing the smart grid at your utility. Addressing these dimensions will help achieve a solid foundation for a comprehensive smart grid computing system architecture.

Empowering the Smart Grid

Trilliant is the leader in delivering intelligent networks that power the smart grid. Trilliant provides hardware, software and service solutions that deliver on the promise of Advanced Metering and Smart Grid to utilities and their customers, including improved energy efficiency, grid reliability, lower operating cost, and integration of renewable energy resources.

Since its founding in 1985, the company has been a leading innovator in the delivery and implementation of advanced metering infrastructure (AMI), demand response and grid management solutions, in addition to installation, program management and meter revenue cycle services. Trilliant is focused on enabling choice for utility companies, ranging from meter, network and IT infrastructures to full or hybrid outsource models.

Solutions

Trilliant provides fully automated, two-way wireless network solutions and software for smart grid applications. The company’s smart grid communications solutions enable utilities to create a more efficient and robust operational infrastructure to:

  • Read meters on demand at intervals of five minutes or less;
  • Improve cash flow;
  • Improve customer service;
  • Decrease issue resolution time;
  • Verify outages and restoration in real time;
  • Monitor substation equipment;
  • Perform on/off cycle reads;
  • Conduct remote connect/disconnect;
  • Significantly reduce/eliminate energy theft through tamper detection; and
  • Realize accounting/billing improvements.

Trilliant solutions also enable the introduction of services and programs such as:

  • Dynamic demand response; and
  • Time-of-use (TOU), critical peak pricing (CPP) and other special tariffs and related metering.

Solid Customer Base

Trilliant has secured contracts for more than three million meters to be supported by its network solutions and services, encompassing both C&I and residential applications. The company has delivered products and services to more than 200 utility customers, including Duke Energy, E.ON US (Louisville Gas & Electric), Hydro One, Hydro Quebec, Jamaica Public Service Company Ltd., Milton Hydro, Northeast Utilities, PowerStream, Public Service Electric & Gas, San Diego Gas & Electric, Toronto Hydro Electric System Ltd., and Union Gas, among others.

SmartGridNet Architecture for Utilities

With the accelerating movement toward distributed generation and the rapid shift in energy consumption patterns, today’s power utilities are facing growing requirements for improved management, capacity planning, control, security and administration of their infrastructure and services.

UTILITY NETWORK BUSINESS DRIVERS

These requirements are driving a need for greater automation and control throughout the power infrastructure, from generation through the customer site. Utilities are also interested in providing end-customers with new applications, such as advanced metering infrastructure (AMI), online usage reports and outage status. In addition to meeting these requirements, utilities are under pressure to reduce costs and automate operations, as well as protect their infrastructures from service disruption in compliance with homeland security requirements.

To succeed, utilities must seamlessly support these demands with an embedded infrastructure of traditional devices and technologies. This will allow them to provide a smooth evolution to next-generation capabilities, manage life cycle issues for aging equipment and devices, maintain service continuity, minimize capital investment, and ensure scalability and future-proofing for new applications, such as smart metering.

By adopting an evolutionary approach to an intelligent communications network (SmartGridNet), utilities can maximize their ability to leverage the existing asset base and minimize capital and operations expenses.

THE NEED FOR AN INTELLIGENT UTILITY NETWORK

As a first step toward implementing a SmartGridNet, utilities must implement intelligent electronic devices (IEDs) throughout the infrastructure – from generation and transmission through distribution directly to customer premises – if they are to effectively monitor and manage facilities, load and usage. A sophisticated operational communications network then interconnects such devices through control centers, providing support for supervisory control and data acquisition (SCADA), teleprotection, remote meter reading, and operational voice and video. This network also enables new applications such as field personnel management and dispatch, safety and localization. In addition, the utility’s corporate communications network increases employee productivity and improves customer service by providing multimedia; voice, video, and data communications; worker mobility; and contact center capabilities.

These two network types – operational and corporate – and the applications they support may leverage common network facilities; however, they have very different requirements for availability, service assurance, bandwidth, security and performance.

SMARTGRIDNET REQUIREMENTS

Network technology is critical to the evolution of the next-generation utility. The SmartGridNet must support the following key requirements:

  • Virtualization. Enables operation of multiple virtual networks over common infrastructure and facilities while maintaining mutual isolation and distinct levels of service.
  • Quality of service (QoS). Allows priority treatment of critical traffic on a “per-network, per-service, per-user basis.”
  • High availability. Ensures constant availability of critical communications, transparent restoration and “always on” service – even when the public switched telephony network (PSTN) or local power supply suffers outages.
  • Multipoint-to-multipoint communications. Provides integrated control and data collection across multiple sensors and regulators via synchronized, redundant control centers for disaster recovery.
  • Two-way communications. Supports increasingly sophisticated interactions between control centers and end-customers or field forces to enable new capabilities, such as customer sellback, return or credit allocation for locally stored power; improved field service dispatch; information sharing; and reporting.
  • Mobile services. Improves employee efficiency, both within company facilities and in the field.
  • Security. Protects the infrastructure from malicious and inadvertent compromise from both internal and external sources, ensures service reliability and continuity, and complies with critical security regulations such as North American Electric Reliability Corp. (NERC).
  • Legacy service integration. Accommodates the continued presence of legacy remote terminal units (RTUs), meters, sensors and regulators, supporting circuit, X.25, frame relay (FR), and asynchronous transfer mode (ATM) interfaces and communications.
  • Future-proofing. Capability and scalability to meet not just today’s applications, but tomorrow’s, as driven by regulatory requirements (such as smart metering) and new revenue opportunities, such as utility delivery of business and residential telecommunications (U-Telco) services.

SMARTGRIDNET EVOLUTION

A number of network technologies – both wire-line and wireless – work together to achieve these requirements in a SmartGridNet. Utilities must leverage a range of network integration disciplines to engineer a smooth transformation of their existing infrastructure to a SmartGridNet.

The remainder of this paper describes an evolutionary scenario, in which:

  • Next-generation synchronous optical network (SONET)-based multiservice provisioning platforms (MSPPs), with native QoS-enabled Ethernet capabilities, are seamlessly introduced at the transport layer to switch traffic from both embedded sensors and next-generation IEDs.
  • Cost-effective wave division multiplexing (WDM) is used to increase communications network capacity for new traffic while leveraging embedded fiber assets.
  • Multiprotocol label switching (MPLS)/IP routing infrastructure is introduced as an overlay on the transport layer only for traffic requiring higher-layer services that cannot be addressed more efficiently by the transport-layer MSPPs.
  • Circuit emulation over IP virtual private networks (VPNs) is supported as a means for carrying sensor traffic over shared or leased network facilities (see the sketch after this list).
  • A variety of communications applications are delivered over this integrated infrastructure to enhance operational efficiency, reliability, employee productivity and customer satisfaction.
  • A toolbox of access technologies is appropriately applied, per specific area characteristics and requirements, to extend power service monitoring and management all the way to the end-customer’s premises.
  • A smart home network offers new capabilities to the end-customer, such as Advanced Metering Infrastructure (AMI), appliance control and flexible billing models.
  • Availability, security, performance and regulatory compliance of the communications network are managed and assured.
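
To illustrate the circuit-emulation item above, this sketch slices a constant-rate TDM byte stream into sequence-numbered packets so the far end can detect loss and play the circuit out at constant rate. Standardized pseudowires such as SAToP add the timing recovery and jitter buffering this toy omits.

```python
import struct

def tdm_to_packets(stream: bytes, payload_size: int = 64):
    """Cut a constant-rate TDM byte stream into fixed-size payloads, each
    prefixed with a 4-byte sequence number. The payload size and header
    format are illustrative, not a standard encapsulation."""
    for seq in range(len(stream) // payload_size):
        chunk = stream[seq * payload_size:(seq + 1) * payload_size]
        yield struct.pack("!I", seq) + chunk

packets = list(tdm_to_packets(bytes(256)))
print(len(packets), len(packets[0]))   # 4 packets of 68 bytes each
```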

THE SMARTGRIDNET ARCHITECTURE

Figure 1 provides an architectural framework that we may use to illustrate and map the relevant communications technologies and protocols.

The backbone network in Figure 1 interconnects corporate sites and data centers, control centers, generation facilities, transmission and distribution substations, and other core facilities. It can isolate the distinct operational and corporate communications networks and subnetworks while enforcing the critical network requirements outlined in the section above.

The underlying transport network for this intelligent backbone is made up of both fiber and wireless (for example, microwave) technologies. The backbone also employs ring and mesh architectures to provide high availability and rapid restoration.

INTELLIGENT CORE TRANSPORT

As alluring as pure packet networks may be, SONET remains a key technology for operational backbones. Only SONET can support the range of new and legacy traffic types while meeting the stringent absolute delay, differential delay and 50-millisecond restoration requirements of real-time traffic.

SONET transport for legacy traffic may be provided by MSPPs, which interoperate with embedded SONET elements to implement ring and mesh protection over fiber facilities and time division multiplexing (TDM)-based microwave. Full-featured Ethernet switch modules in these MSPPs enable next-generation traffic via Ethernet over SONET (EOS) and/or packet over SONET (POS). Appropriate, cost-effective wave division multiplexing (WDM) solutions – for example, coarse, passive and dense WDM – may also be applied to guarantee sufficient capacity while leveraging existing fiber assets.

BACKBONE SWITCHING/ROUTING

From a switching and routing perspective, a significant amount of traffic in the backbone may be managed at the transport layer – for example, via QoS-enabled Ethernet switching capabilities embedded in the SONET-based MSPPs. This is a key capability for supporting expedited delivery of critical traffic types, enabling utilities to migrate to more generic object-oriented substation event (GOOSE)-based inter-substation communications for SCADA and teleprotection in the future in accordance with standards such as IEC 61850.

Where higher-layer services – for example, IP VPN, multicast, ATM and FR – are required, however, utilities can introduce a multi-service switching/routing infrastructure incrementally on top of the transport infrastructure. The switching infrastructure is based on multi-protocol label switching (MPLS), implementing Layer 2 transport encapsulation and/or IP VPNs, per the relevant Internet engineering task force (IETF) requests for comments (RFCs).

This type of unified infrastructure reduces operations costs by sharing switching and restoration capabilities across multiple services. Current IP/MPLS switching technology is consistent with the network requirements summarized above for service traffic requiring higher-layer services, and may be combined with additional advanced services such as Layer 3 VPNs and unified threat management (UTM) devices/firewalls for further protection and isolation of traffic.
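
As a rough illustration of this layering decision, the Python sketch below maps named traffic classes to either the transport layer or the IP/MPLS overlay, with QoS priorities and delay budgets. The class names, priorities and delay figures are invented for illustration only; real values would come from the utility's traffic engineering and applicable standards such as IEC 61850.

```python
# Illustrative only: hypothetical traffic classes mapped to transport-layer
# (MSPP Ethernet/SONET) or IP/MPLS-overlay handling with QoS priorities.
from dataclasses import dataclass

@dataclass
class TrafficClass:
    name: str
    layer: str         # "transport" (MSPP Ethernet/SONET) or "mpls" (overlay)
    priority: int      # lower number = more expedited forwarding
    max_delay_ms: int  # assumed one-way delay budget, not a standard value

CLASSES = [
    TrafficClass("teleprotection", "transport", 0, 8),
    TrafficClass("scada",          "transport", 1, 20),
    TrafficClass("video",          "mpls",      2, 150),
    TrafficClass("corporate_vpn",  "mpls",      3, 400),
]

def handling_for(name: str) -> TrafficClass:
    """Look up how a named traffic class is carried; default to the overlay."""
    return next((tc for tc in CLASSES if tc.name == name), CLASSES[-1])

if __name__ == "__main__":
    for flow in ("teleprotection", "corporate_vpn"):
        tc = handling_for(flow)
        print(f"{tc.name}: {tc.layer} layer, priority {tc.priority}, "
              f"<= {tc.max_delay_ms} ms")
```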

CORE COMMUNICATIONS APPLICATIONS

Operational services such as tele-protection and SCADA represent key categories of applications driving the requirements for a robust, secure, cost-effective network as described. Beyond these, there are a number of communications applications enabling improved operational efficiency for the utility, as well as mechanisms to enhance employee productivity and customer service. These include, but are not limited to:

  • Active network controls. Improves capacity and utilization of the electricity network.
  • Voice over IP (VoIP). Leverages common network infrastructure to reduce the cost of operational and corporate voice communications – for example, eliminating costly channel banks for individual lines required at remote substations.
  • Closed circuit TV (CCTV)/Video Over IP. Improves surveillance of remote assets and secure automated facilities.
  • Multimedia collaboration. Combines voice, video and data traffic in a rich application suite to enhance communication and worker productivity, giving employees direct access to centralized expertise and online resources (for example, standards and diagrams).
  • IED interconnection. Measures and manages the electricity network more effectively.
  • Mobility. Leverages in-plant and field worker mobility – via cellular, land mobile radio (LMR) and WiFi – to improve efficiency of key work processes.
  • Contact center. Employs next-generation communications and best-in-class customer service business processes to improve customer satisfaction.

DISTRIBUTION AND ACCESS NETWORKS

The intelligent utility distribution and access networks are subtending networks from the backbone, accommodating traffic between backbone switches/applications and devices in the distribution infrastructure all the way to the customer premises. IEDs on customer premises include automated meters and device regulators to detect and manage customer power usage.

These new devices are primarily packet-based and may, therefore, be best supported by packet-based access network technologies, though TDM may also be used on select rings where warranted. The packet-based access technology chosen depends on the specifics of the sites to be connected and the economics of the area (for example, right of way, customer densities and embedded infrastructure).

Regardless of the access and last-mile network designs, traffic ultimately arrives at the network via an IP/MPLS edge switch/router with connectivity to the backbone IP/MPLS infrastructure. This switching/routing infrastructure ensures connectivity among the intelligent edge devices, core capabilities and control applications.

THE SMART HOME NETWORK

A futuristic home can support many remotely controlled and managed appliances centered on lifestyle improvements of security, entertainment, health and comfort (see Figure 2). In such a home, applications like smart meters and appliance control could be provided by application service providers (ASPs) (such as smart meter operators or utilities), using a home service manager and appropriate service gateways. This architecture differentiates between the access provider – that is, the utility/U-Telco or other public carrier – and the multiple ASPs who may provide applications to a home via the access provider.

FLEXIBLE CHARGING

By employing smart meters and developing the ability to retrieve electricity usage data at regular intervals – potentially several readings per hour – retailers could make billing a significant competitive differentiator. Detailed usage information has already enabled value-added billing in the telecommunications world, and AMI can do likewise for electricity services. In time, electricity users will come to expect the same degree of flexible charging with their electricity bills that they already experience with their telephone bills, including, for example, prepaid and post-paid options, time-based tariffs, automated billing for vacation-home rentals, family or group tariffs, budget tariffs and messaging.
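
As a simple illustration of the kind of flexible charging AMI makes possible, the sketch below prices a day of interval reads under a hypothetical time-of-use tariff and debits a prepaid account. All rates and the peak window are invented for illustration.

```python
# Hypothetical tariff: all rates and the peak window are invented.
PEAK_HOURS = range(17, 21)                # assumed 5-9 p.m. peak window
RATES = {"peak": 0.24, "offpeak": 0.09}   # $/kWh

def charge(reads):
    """reads: iterable of (hour_of_day, kwh) tuples from the smart meter."""
    return round(sum(kwh * (RATES["peak"] if hour in PEAK_HOURS
                            else RATES["offpeak"])
                     for hour, kwh in reads), 2)

def apply_prepaid(balance, reads):
    """Debit a prepaid account, one of the charging models mentioned above."""
    return round(balance - charge(reads), 2)

if __name__ == "__main__":
    day = [(h, 1.2 if h in PEAK_HOURS else 0.6) for h in range(24)]
    print("Daily charge:", charge(day))                    # 2.23
    print("Remaining balance:", apply_prepaid(50.0, day))  # 47.77
```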

MANAGING THE COMMUNICATIONS NETWORK

For utilities to leverage the communications network described above to meet key business requirements, they must intelligently manage that network’s facilities and services. This includes:

  • Configuration management. Provisioning services to ensure that underlying switching/routing and transport requirements are met.
  • Fault and performance management. Monitoring, correlating and isolating fault and performance data so that proactive, preventative and reactive corrective actions can be initiated (a simple correlation sketch follows this list).
  • Maintenance management. Planning of maintenance activities, including material management and logistics, and geographic information management.
  • Restoration management. Creating trouble tickets, dispatching and managing the workforce, and carrying out associated tracking and reporting.
  • Security management. Assuring the security of the infrastructure, managing access to authorized users, responding to security events, and identifying and remediating vulnerabilities per key security requirements such as NERC.
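
To make the fault and performance management function more concrete, here is a minimal sketch of alarm correlation, assuming an invented alarm format: duplicate alarms are collapsed to the worst event per device before a trouble ticket is opened.

```python
# Illustrative alarm-correlation sketch; the alarm format is invented.
def correlate(alarms):
    """alarms: dicts like {"device": "sub-14-rtr", "severity": "major"}.
    Keep only the highest-severity alarm per device."""
    rank = {"minor": 0, "major": 1, "critical": 2}
    worst = {}
    for alarm in alarms:
        dev = alarm["device"]
        if dev not in worst or rank[alarm["severity"]] > rank[worst[dev]["severity"]]:
            worst[dev] = alarm
    return list(worst.values())

def open_tickets(alarms):
    """One trouble ticket per affected device, feeding restoration management."""
    return [f"TICKET {a['device']} ({a['severity']})" for a in correlate(alarms)]

if __name__ == "__main__":
    raw = [
        {"device": "sub-14-rtr", "severity": "minor"},
        {"device": "sub-14-rtr", "severity": "critical"},
        {"device": "dc-2-mspp", "severity": "major"},
    ]
    print("\n".join(open_tickets(raw)))
```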

Utilities can integrate these capabilities into their existing network management infrastructures, or they can fully or partially outsource them to managed network service providers.

Figure 3 shows how key technologies are mapped to the architectural framework described previously. Being able to evolve into an intelligent utilities network in a cost-effective manner requires trusted support throughout planning, design, deployment, operations and maintenance.

CONCLUSION

Utilities can evolve their existing infrastructures to meet key SmartGridnet requirements by effectively leveraging a range of technologies and approaches. Through careful planning, designing, engineering and application of this technology, such firms may achieve the business objectives of SmartGridnet while protecting their current investments in infrastructure. Ultimately, by taking an evolutionary approach to SmartGridnet, utilities can maximize their ability to leverage the existing asset base as well as minimize capital and operations expenses.

Software-Based Intelligence: The Missing Link in the SmartGrid Vision

Achieving the SmartGrid vision requires more than advanced metering infrastructure (AMI), supervisory control and data acquisition (SCADA), and advanced networking technologies. While these critical technologies provide the main building blocks of the SmartGrid, its fundamental keystone – its missing link – will be embedded software applications located closer to the edge of the electric distribution network. Only through embedded software will the true SmartGrid vision be realized.

To understand what we mean by the SmartGrid, let’s take a look at some of its common traits:

  • It’s highly digital.
  • It’s self-healing.
  • It offers distributed participation and control.
  • It empowers the consumer.
  • It fully enables electricity markets.
  • It optimizes assets.
  • It’s evolvable and extensible.
  • It provides information security and privacy.
  • It features an enhanced system for reliability and resilience.

All of the above-described traits – which together comprise a holistic definition of the SmartGrid – share the requirement to embed intelligence in the hardware infrastructure (which is composed of advanced grid components such as AMI and SCADA). Just as important as the hardware for hosting the embedded software are the communications and networking technologies that enable real-time and near real-time communications among the various grid components.

The word intelligence has many definitions; however, the one cited in the 1994 Wall Street Journal article “Mainstream Science on Intelligence” (by Linda Gottfredson, and signed by 51 other professors) offers a reasonable application to the SmartGrid. It defines the word intelligence as the “ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience.”

While the ability of the grid to approximate the reasoning and learning capabilities of humans may be a far-off goal, the fact that the terms intelligence and smart appear so often these days raises the following question: How can the existing grid become the SmartGrid?

THE BRAINS OF THE OPERATION

The fact that the SmartGrid derives its intelligence directly from analytics and algorithms via embedded intelligence applications based on analytical software can’t be overemphasized. While seemingly simple in concept and well understood in other industries, this topic typically isn’t addressed in any depth in many SmartGrid R&D and pilot projects. Due to the viral nature of the SmartGrid industry, every company with any related technology is calling that technology SmartGrid technology – all well and good, as long as you aren’t concerned about actually having intelligence in your SmartGrid project. It is this author’s opinion, however, that very few companies actually have the right stuff to claim the “smart” or “intelligence” part of the SmartGrid infrastructure – what we see as the missing link in the SmartGrid value chain.

A more realistic way to define intelligence in reference to the SmartGrid might read as follows:

The ability to provide longer-term planning and balancing of the grid; near- and real-time sensing, filtering and balancing; and self-healing, adaptive response and upgradeable logic to support continuous changes to grid operations in order to ensure cost reductions, reliability and resilience.

Software-based intelligence can be applied to all aspects or characteristics of the SmartGrid as discussed above. Figure 1 summarizes these roles.

BASIC BUILDING BLOCKS

Taking into consideration the very high priority that must be placed on established IT-industry concepts of security and interoperability as defined in the GridWise Architecture Council (GWAC) Framework for Interoperability, the SmartGrid should include as its basic building blocks the components outlined in Figure 2.

The real-world grid and supporting infrastructure will need to incorporate legacy systems as well as incremental changes consisting of multiple and disparate upgrade paths. The ideal path to realizing the SmartGrid vision must consider the installation of any SmartGrid project using the order shown in Figure 2 – that is, the device hardware would be installed in Block 1, communications and networking infrastructure added in Block 2, embedded intelligence added in Block 3, and middleware services and applications layered in Block 4. In a perfect world, the embedded intelligence software in Block 3 would be configured into the device block at the time of design or purchase. Some intelligence types (in the form of services or applications) that could be preconfigured into the device layer with embedded software could include (but aren’t limited to) the following (a device-agent sketch follows the list):

  • Capture. Provides status and reports on operation, performance and usage of a given monitored device or environment.
  • Diagnose. Enables device to self-optimize or allows a service person to monitor, troubleshoot, repair and maintain devices; upgrades or augments performance of a given device; and prevents problems with version control, technology obsolescence and device failure.
  • Control and automate. Coordinates the sequenced activity of several devices. This kind of intelligence can also cause devices to perform discrete on/off actions.
  • Profile and track behavior. Monitors variations in the location, culture, performance, usage and sales of a device.
  • Replenishment and commerce. Monitors consumption of a device and buying patterns of the end-user (allowing applications to initiate purchase orders or other transactions when replenishment is needed); provides location mapping and logistics; tracks and optimizes the service support system for devices.
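
A minimal sketch of what these intelligence types might look like when embedded at the device layer appears below. The DeviceAgent class, its method names and the rating threshold are hypothetical, intended only to show capture, diagnose and control living in one small piece of embedded software.

```python
# Hypothetical device agent; names and thresholds are invented for illustration.
class DeviceAgent:
    """Minimal embedded-intelligence wrapper around one monitored device."""

    def __init__(self, device_id):
        self.device_id = device_id
        self.readings = []

    # Capture: status and usage reporting
    def capture(self, kw):
        self.readings.append(kw)
        return {"device": self.device_id, "kw": kw}

    # Diagnose: simple self-check against an assumed rating
    def diagnose(self, rated_kw=10.0):
        overloads = [r for r in self.readings if r > rated_kw]
        return {"healthy": not overloads, "overload_events": len(overloads)}

    # Control and automate: discrete on/off action
    def set_state(self, on: bool):
        return f"{self.device_id} switched {'on' if on else 'off'}"

if __name__ == "__main__":
    agent = DeviceAgent("xfmr-221")
    agent.capture(8.2)
    agent.capture(11.5)
    print(agent.diagnose())       # {'healthy': False, 'overload_events': 1}
    print(agent.set_state(False))
```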

EMBEDDED INTELLIGENCE AT WORK

Intelligence types will, of course, differ according to their application. For example, a distribution utility looking to optimize assets and real-time distribution operations may need sophisticated mathematical and artificial intelligence solutions with dynamic, nonlinear optimization models (to accommodate a high amount of uncertainty), while a homeowner wishing to participate in demand response may require less sophisticated business rules. The embedded intelligence is, therefore, responsible for the management and mining of potentially billions, if not trillions, of device-generated data points for decision support, settlement, reliability and other financially significant transactions. This computational intelligence can sense, store and analyze any number of information patterns to support the SmartGrid vision. In all cases, the software infrastructure portion of the SmartGrid building blocks must accommodate any number of these cases – from simple to complex – if the economics are to be viable.

For example, the GridAgents software platform is being used in several large U.S. utility distribution automation infrastructure enhancements to embed intelligence in the entire distribution and extended infrastructure; this in turn facilitates multiple applications simultaneously, as depicted in Figure 3 (highlighting microgrids and compact networks). Included are the following example applications: renewables integration, large-scale virtual power plant applications, volt and VAR management, SmartMeter management and demand response integration, condition-based maintenance, asset management and optimization, fault location, isolation and restoration, look-ahead contingency analysis, distribution operation model analysis, relay protection coordination, “islanding” and microgrid control, and sense-and-respond applications.

Using this model of embedded intelligence, the universe of potential devices that could be directly included in the grid system includes buildings and home automation, distribution automation, substation automation, transmission system, and energy market and operations – all part of what Harbor Research terms the Pervasive Internet. The Pervasive Internet concept assumes that devices are connected using TCP/IP protocols; however, it is not limited by whether a particular network supports mission-critical SCADA or home automation (which obviously require very different security protocols). As the missing link, the embedded software intelligence we’ve been talking about can be present in any of these Pervasive Internet devices.

DELIVERY SYSTEMS

There are many ways to deliver the embedded software intelligence building block of the SmartGrid, and many vendors who will be vying to participate in this rapidly expanding market. In a physical sense, the embedded intelligence can be delivered through various grid interfaces, including facility-level and distribution-system automation and energy management systems. The best way to realize the SmartGrid vision, however, will most likely come out of making as much use as possible of the existing infrastructure (since installing new infrastructure is extremely costly). The most promising areas for embedding intelligence include the various gateways and collector nodes, as well as devices on the grid itself (as shown in Figure 4). Examples of such devices include SmartMeter gateways, substation PCs, inverter gateways and so on. By taking advantage of the natural and distributed hierarchy of device networks, multiple SmartGrid service offerings can be delivered with a common infrastructure and common protocols.

Some of the most promising technologies for delivering the embedded intelligence layer of the SmartGrid infrastructure include the following:

  • The semantic Web is an extension of the current Web that permits machine-understandable data. It provides a common framework that allows data to be shared and re-used across application and company boundaries. It integrates applications using URLs for naming and XML for syntax.
  • Service-oriented computing represents a cross-disciplinary approach to distributed software. Services are autonomous, platform-independent computational elements that can be described, published, discovered, orchestrated and programmed using standard protocols. These services can be combined into networks of collaborating applications within and across organizational boundaries.
  • Software agents are autonomous, problem-solving computational entities. They often interact and cooperate with other agents (both people and software) that may have conflicting aims. Known as multi-agent systems, such environments add the ability to coordinate complex business processes and adapt to changing conditions on the fly (a toy example follows this list).
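
The toy example below suggests how a multi-agent approach might coordinate without a central controller: each agent autonomously offers the load it can shed, and a simple greedy protocol accepts offers until a target is met. Agent names and numbers are invented; a production platform would add negotiation, security and failure handling.

```python
# A toy multi-agent sketch (invented names; not any vendor's platform).
class LoadAgent:
    def __init__(self, name, sheddable_kw):
        self.name = name
        self.sheddable_kw = sheddable_kw

    def offer(self):
        """Each agent autonomously decides what it can shed right now."""
        return self.sheddable_kw

    def shed(self, kw):
        taken = min(kw, self.sheddable_kw)
        self.sheddable_kw -= taken
        return taken

def coordinate(agents, target_kw):
    """Greedy coordination: accept the largest offers until the target is met."""
    shed = 0.0
    for agent in sorted(agents, key=lambda a: -a.offer()):
        if shed >= target_kw:
            break
        shed += agent.shed(target_kw - shed)
    return shed

if __name__ == "__main__":
    fleet = [LoadAgent("hvac-7", 3.0), LoadAgent("pump-2", 5.0),
             LoadAgent("ev-9", 2.0)]
    print("Shed:", coordinate(fleet, 6.5), "kW")  # Shed: 6.5 kW
```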

CONCLUSION

By incorporating the missing link in the SmartGrid infrastructure – the embedded-intelligence software building block – the SmartGrid vision can not only be achieved, but significant benefits to the utility and other stakeholders can be delivered much more efficiently and with incremental changes to the functions supporting the SmartGrid vision. Embedded intelligence provides a structured way to communicate with and control the large number of disparate energy-sensing, communications and control systems within the electric grid infrastructure. This includes the capability to deploy at low cost, scale and enable security as well as the ability to interoperate with the many types of devices, communication networks, data protocols and software systems used to manage complex energy networks.

A fully distributed intelligence approach based on embedded software offers potential advantages in lower cost, flexibility, security, scalability and acceptance among a wide group of industry stakeholders. By embedding functionality in software and distributing it across the electrical distribution network, the intelligence is pushed to the edge of the system network, where it can provide the most value. In this way, every node can be capable of hosting an intelligent software program. Although decentralized structures remain a controversial topic, this author believes they will be critical to the success of next-generation energy networks (the SmartGrid). The current electrical grid infrastructure is composed of a large number of existing potential devices that provide data which can serve as the starting point for embedded smart monitoring and decision support, including electric meters, distribution equipment, network protectors, distributed energy resources and energy management systems. From a high-level design perspective, the embedded intelligence software architecture needs to support the following:

  • Decentralized management and intelligence;
  • Extensibility and reuse of software applications;
  • The ability to add or remove components with little central control or coordination;
  • Fault tolerance at both the system level and the subsystem level to detect and recover from system failures;
  • Support for carrying out analysis and control where the resources are available, not where the results are needed (at the edge versus the central grid);
  • Compatibility with different information technology devices and systems;
  • Open communication protocols that run on any network; and
  • Interoperability and integration with existing and evolving energy standards.

Adding the embedded-intelligence building block to existing SmartGrid infrastructure projects (including AMI and SCADA) and advanced networking technology projects will bring the SmartGrid vision to market faster and more economically while accommodating the incremental nature of SmartGrid deployments. The embedded intelligence software can provide some of the greatest benefits of the SmartGrid, including asset optimization, run-time intelligence and flexibility, the ability to solve multiple problems with a single infrastructure and greatly reduced integration costs through interoperability.

Using Analytics for Better Mobile Technology Decisions

Mobile computing capabilities have been proven to drive business value by providing traveling executives, field workers and customer service personnel with real-time access to customer data. Better and more timely access to information shortens response times, improves accuracy and makes the workforce more productive.

However, although your organization may agree that technology can improve business processes, different stakeholders – IT management, financial and business leadership and operations personnel – often have different perspectives on the real costs and value of mobility. For example, operations wants tools that help employees work faster and focus more intently on the customer; finance wants the solution that costs the least amount this quarter; and IT wants to implement mobile projects that can succeed without draining resources from other initiatives.

It may not be obvious, but there are ways to achieve everyone’s goals. Analytics can help operations, finance and IT find common ground. When teams understand the data, they can understand the logic. And when they understand the logic, they can support making the right decision.

EXPOSING THE FORMULA

Deploying mobile technology is a strategic initiative with far-reaching consequences for the health of an enterprise. In the midst of evaluating a mobile project, however, it’s easy to forget that the real goal of hardware-acquisition initiatives is to make the workforce more productive and improve both the top and bottom lines over the long term.

Most decision-analytics tools focus on up-front procurement questions alone, because the numbers seem straightforward and uncomplicated. But these analyses miss the point. The best analysis is one that can determine which of the solutions will provide the most advantages to the workforce at the lowest possible overall cost to the organization.

To achieve the best return on investment (ROI), we must do more than recoup an out-of-pocket expense: Are customers better served? Are employees working better, faster, smarter? Though hard to quantify, these are the fundamental aspects that determine the ROI of technology.

It’s possible to build a vendor-neutral analysis to calculate the total cost of ownership (TCO) and ROI of mobile computers. Panasonic Computer Solutions Company, the manufacturer of Toughbook notebooks, enlisted the services of my analytics company, Serious Networks, Inc., to develop an unbiased TCO/ROI application to help companies make better decisions when purchasing mobile computers.

The Panasonic-sponsored operational analysis tool provides statistically valid answers by performing a simulation of the devices as they would be used and managed in the field, generating a model that compares the costs and benefits of multiple manufacturers’ laptops. Purchase cost, projected downtime, the range of wireless options, notebook features, support and other related costs are all incorporated into this analytic toolset.

From more than 100 unique simulations with actual customers, four key TCO/ROI questions emerged:

  • What will it cost to buy a proposed notebook solution?
  • What will it cost to own it over the life of the project?
  • What will it cost to deploy and decommission the units?
  • What value will be created for the organization?

MOVING BEYOND GUESSTIMATES – CONSIDERING COSTS AND VALUE OVER A LIFETIME

There is no such thing as an average company, so an honest analysis uses actual corporate data instead of industry averages. Just because a device is the right choice for one company does not make it the right choice for yours.

An effective simulation takes into account the cost of each competing device, the number of units and the rate of deployment. It calculates the cost of maintaining a solution and establishes the value of productive time using real loaded labor rates or revenue hours. It considers buy versus lease questions and can extrapolate how features will be used in the field.

As real-world data is entered, the software determines which mobile computing solution is most likely to help the company reach its goals. Managers can perform what-if analyses by adjusting assumptions and re-running the simulation. Within this framework, managers will build a business case that forecasts the costs of each mobile device against the benefits derived over time (see Figures 1 and 2).
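
A drastically simplified, vendor-neutral version of such a model can be expressed in a few lines. The sketch below is not Panasonic's tool; all field names and figures are invented inputs, meant only to show how what-if analysis works by re-running the same formulas with different assumptions.

```python
# Simplified TCO/ROI sketch; every input below is an invented assumption.
def tco(purchase, units, annual_support, failure_rate, downtime_cost, years=3):
    """Total cost of ownership: acquisition + support + expected failure cost."""
    acquisition = purchase * units
    support = annual_support * units * years
    failures = failure_rate * units * years * downtime_cost
    return acquisition + support + failures

def roi(value_created, total_cost):
    return (value_created - total_cost) / total_cost

if __name__ == "__main__":
    # What-if: compare two hypothetical notebooks over a three-year life.
    rugged   = tco(purchase=3500, units=200, annual_support=150,
                   failure_rate=0.05, downtime_cost=1200)
    standard = tco(purchase=1800, units=200, annual_support=150,
                   failure_rate=0.25, downtime_cost=1200)
    print(f"Rugged TCO:   ${rugged:,.0f}")    # $826,000
    print(f"Standard TCO: ${standard:,.0f}")  # $630,000
    # Assumed productivity value created by each fleet: $900k vs. $650k.
    print("Rugged ROI:   {:.1%}".format(roi(900_000, rugged)))
    print("Standard ROI: {:.1%}".format(roi(650_000, standard)))
```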

MAKING INTANGIBLES TANGIBLE

The 90-minute analysis process is very granular. It’s based on the industry segment – because it simulates the tasks of the workforce – and compares up to 10 competing devices.

Once devices are selected, purchase or lease prices are entered, followed by value-added benefits like no-fault warranties and on-site support. Intangible factors favoring one vendor over another, such as incumbency, are added to the data set. The size and rate of the deployment, as well as details that determine the cost of preparing the units for the workforce, are also considered.

Next the analysis accounts for the likelihood and cost of failure, using your own experience as a baseline. Somewhat surprisingly, the impact of failure is given less weight than most outside observers would expect. Reliability is important, but it’s not the only or most important attribute.

Greater weight is given to productivity and operational enhancements, which can have a significantly greater financial impact than reliability because, statistically, employees spend much more of their time working than dealing with equipment malfunctions.

A matrix of features and key workforce behaviors is developed to examine the relative importance of touch screens, wireless and GPS, as well as each computer vendor’s ability to provide those features as standard or extra-cost equipment. The features are rated for their time and motion impact on your organization, and an operations efficiency score is applied to imitate real-world results.

During the session, the workforce is described in detail, because this information directly affects the cost and benefit. To assess the value of a telephone lineman’s time, for example, the system must know the average number of daily service orders, the percentage of those service calls that require re-work and whether linemen are normally in the field five, six or seven days a week.

Once the data is collected and input, it can be modified to provide instantaneous what-if, heads-up and break-even analysis reports – without interference from the vendor. The model is built in Microsoft Excel so that anyone can assess the credibility of the analysis and determine independently that there are no hidden calculations or unfair formulas skewing the results.

CONCLUSION

The Panasonic simulation tool can help different organizations within a company come to consensus before making a buying decision. Analytics help clarify whether a purpose-built rugged or business-rugged system or some other commercial notebook solution is really the right choice for minimizing the TCO and maximizing the ROI of workforce mobility.

ABOUT THE AUTHOR

Jason Buk is an operations director at Serious Networks, Inc., a Denver-based business analytics firm. Serious Networks uses honest forecasting and rigorous analysis to determine what resources are most likely to increase the effectiveness of the workforce, meet corporate goals and manage risk in the future.

Is Your Mobile Workforce Truly Optimized?

ClickSoftware is the leading provider of mobile workforce management and service optimization solutions that create business value for service operations through higher levels of productivity, customer satisfaction and cost effectiveness. Combining educational, implementation and support services with best practices and its industry-leading solutions, ClickSoftware drives service decision making across all levels of the organization.

Our mobile workforce management solution helps utilities empower mobile workers with accurate, real-time information for optimum service and quick on-site decision making. From proactive customer demand forecasting and capacity planning to real-time decision-making, incorporating scheduling, mobility and location-based services, ClickSoftware helps service organizations get the most out of their resources.

The IBM-ClickSoftware alliance provides the most comprehensive offering for Mobile Workforce and Asset Management powering the real-time service enterprise. Customers can benefit from maximized workforce productivity and customer satisfaction while controlling, and then minimizing, operational costs.

ClickSoftware provides a flexible, scalable and proven solution that has been deployed at many utility companies around the world. Highlights include the ability to:

  • Automatically update the schedule based on real-time information from the field;
  • Manage crews (parts and people);
  • Cover a wide variety of job types within one product – from short jobs requiring one person to multistage jobs needing a multi-person team over several days or weeks;
  • Balance regulatory, environmental and union compliance;
  • Continuously strive to raise the bar in operational excellence;
  • Incorporate street-level routing into the decision-making process; and
  • Plan for catastrophic events and seasonal variability in field service operations.

The resulting value proposition to the customer is extremely compelling:

  • Typically, optimized scheduling and routing of the mobile workforce generates a 31 percent increase in jobs per day versus the industry average (Source: AFSMI survey 2003).
  • A variety of solutions, ranging from entry level to advanced, directly address the broad spectrum of pains experienced by service organizations around the world, including optimized scheduling, routing, mobile communications and integration of solutions components – within the service optimization solution itself and also into the CRM/ERP/EAM back end.
  • An entry level offering with a staged upgrade path toward a fully automated service optimization solution ensures that risk is managed and the most challenging of customer requirements may be met. This "least risk" approach for the customer is delivered by a comprehensive set of IBM business consulting, installation and support services.
  • The industry-proven credibility of ClickSoftware’s ServiceOptimization Suite, combined with IBM’s wireless middleware, software, hardware and business consulting services, provides the customer with the most effective platform for managing field service operations.

ClickSoftware’s customers represent a cross section of leaders in the utilities, telecommunications, computer and office equipment, home services, and capital equipment industries. Close to 100 customers around the world have employed ClickSoftware service optimization solutions and services to achieve optimal levels of field service.

To find out more visit www.clicksoftware.com or call 888.438.3308.

Achieving Decentralized Coordination In the Electric Power Industry

For the past century, the dominant business and regulatory paradigms in the electric power industry have been centralized economic and physical control. The ideas presented here and in my forthcoming book, Deregulation, Innovation, and Market Liberalization: Electricity Restructuring in a Constantly Evolving Environment (Routledge, 2008), comprise a different paradigm – decentralized economic and physical coordination – which will be achieved through contracts, transactions, price signals and integrated intertemporal wholesale and retail markets. Digital communication technologies – which are becoming ever more pervasive and affordable – are what make this decentralized coordination possible. In contrast to the “distributed control” concept often invoked by power systems engineers (in which distributed technology is used to enhance centralized control of a system), “decentralized coordination” represents a paradigm in which distributed agents themselves control part of the system, and in aggregate, their actions produce order: emergent order. [1]

Dynamic retail pricing, retail product differentiation and complementary end-use technologies provide the foundation for achieving decentralized coordination in the electric power industry. They bring timely information to consumers and enable them to participate in retail market processes; they also enable retailers to discover and satisfy the heterogeneous preferences of consumers, all of whom have private knowledge that’s unavailable to firms and regulators in the absence of such market processes. Institutions that facilitate this discovery through dynamic pricing and technology are crucial for achieving decentralized coordination. Thus, retail restructuring that allows dynamic pricing and product differentiation, doesn’t stifle the adoption of digital technology and reduces retail entry barriers is necessary if this value-creating decentralized coordination is to happen.

This paper presents a case study – the “GridWise Olympic Peninsula Testbed Demonstration Project” – that illustrates how digital end-use technology and dynamic pricing combine to provide value to residential customers while increasing network reliability and reducing required infrastructure investments through decentralized coordination. The availability (and increasing cost-effectiveness) of digital technologies enabling consumers to monitor and control their energy use and to see transparent price signals has made existing retail rate regulation obsolete. Instead, the policy recommendation that this analysis implies is that regulators should reduce entry barriers in retail markets and allow for dynamic pricing and product differentiation, which are the keys to achieving decentralized coordination.

THE KEYS: DYNAMIC PRICING, DIGITAL TECHNOLOGY

Dynamic pricing provides price signals that reflect variations in the actual costs and benefits of providing electricity at different times of the day. Some of the more sophisticated forms of dynamic pricing harness the dramatic improvements in information technology of the past 20 years to communicate these price signals to consumers. These same technological developments also give consumers a tool for managing their energy use, in either manual or automated form. Currently, with almost all U.S. consumers (even industrial and commercial ones) paying average prices, there’s little incentive for consumers to manage their consumption and shift it away from peak hours. This inelastic demand leads to more capital investment in power plants and transmission and distribution facilities than would occur if consumers could make choices based on their preferences and in the face of dynamic pricing.

Retail price regulation stifles the economic processes that lead to both static and dynamic efficiency. Keeping retail prices fixed truncates the information flow between wholesale and retail markets, and leads to inefficiency, price spikes and price volatility. Fixed retail rates for electric power service mean that the prices individual consumers pay bear little or no relation to the marginal cost of providing power in any given hour. Moreover, because retail prices don’t fluctuate, consumers are given no incentive to change their consumption as the marginal cost of producing electricity changes. This severing of incentives leads to inefficient energy consumption in the short run and also causes inappropriate investment in generation, transmission and distribution capacity in the long run. It has also stifled the implementation of technologies that enable customers to make active consumption decisions, even though communication technologies have become ubiquitous, affordable and user-friendly.

Dynamic pricing can include time-of-use (TOU) rates, which are different prices in blocks over a day (based on expected wholesale prices), or real-time pricing (RTP) in which actual market prices are transmitted to consumers, generally in increments of an hour or less. A TOU rate typically applies predetermined prices to specific time periods by day and by season. RTP differs from TOU mainly because RTP exposes consumers to unexpected variations (positive and negative) due to demand conditions, weather and other factors. In a sense, fixed retail rates and RTP are the end points of a continuum of how much price variability the consumer sees, and different types of TOU systems are points on that continuum. Thus, RTP is but one example of dynamic pricing. Both RTP and TOU provide better price signals to customers than current regulated average prices do. They also enable companies to sell, and customers to purchase, electric power service as a differentiated product.
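
A worked toy example makes the continuum visible: the same day's consumption billed under a flat rate, a TOU schedule and an RTP curve. All prices and the hourly load shape below are invented for illustration.

```python
# Invented load shape and prices, for illustration only.
usage = [0.8] * 7 + [1.5] * 10 + [2.5] * 5 + [1.0] * 2   # kWh for hours 0-23

FLAT_RATE = 0.12                                          # $/kWh, one price

def tou_rate(hour):
    """Predetermined blocks: a higher price during the 5-9 p.m. peak."""
    return 0.20 if 17 <= hour < 21 else 0.08

# Toy real-time price curve with an evening spike, by hour of day.
rtp = [0.05 + 0.01 * h if h < 18 else 0.30 - 0.02 * (h - 18) for h in range(24)]

flat_bill = sum(usage) * FLAT_RATE
tou_bill = sum(kwh * tou_rate(h) for h, kwh in enumerate(usage))
rtp_bill = sum(kwh * p for kwh, p in zip(usage, rtp))

print(f"Flat: ${flat_bill:.2f}  TOU: ${tou_bill:.2f}  RTP: ${rtp_bill:.2f}")
```

Under the flat rate the consumer sees no price variability at all; under TOU the blocks are known in advance; under RTP the evening spike flows straight through to the bill, which is what creates the incentive to shift load.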

TECHNOLOGY’S ROLE IN RETAIL CHOICE

Digital technologies are becoming increasingly available to reduce the cost of sending prices to people and their devices. The 2007 Galvin Electricity Initiative report “The Path to Perfect Power: New Technologies Advance Consumer Control” catalogs a variety of end-user technologies (from price-responsive appliances to wireless home automation systems) that can communicate electricity price signals to consumers, retain data on their consumption and be programmed to respond automatically to trigger prices that the consumer chooses based on his or her preferences. [2] Moreover, the two-way communication advanced metering infrastructure (AMI) that enables a retailer and consumer to have that data transparency is also proliferating (albeit slowly) and declining in price.

Dynamic pricing and the digital technology that enables communication of price information are symbiotic. Dynamic pricing in the absence of enabling technology is meaningless. Likewise, technology without economic signals to respond to is extremely limited in its ability to coordinate buyers and sellers in a way that optimizes network quality and resource use. [3] The combination of dynamic pricing and enabling technology changes the value proposition for the consumer from “I flip the switch, and the light comes on” to a more diverse and consumer-focused set of value-added services.

These diverse value-added services empower consumers and enable them to control their electricity choices with more granularity and precision than the environment in which they think solely of the total amount of electricity they consume. Digital metering and end-user devices also decrease transaction costs between buyers and sellers, lowering barriers to exchange and to the formation of particular markets and products.

Whether they take the form of building control systems that enable the consumer to see the amount of power used by each function performed in a building or appliances that can be programmed to behave differently based on changes in the retail price of electricity, these products and services provide customers with an opportunity to make better choices with more precision than ever before. In aggregate, these choices lead to better capacity utilization and better fuel resource utilization, and provide incentives for innovation to meet customers’ needs and capture their imaginations. In this sense, technological innovation and dynamic retail electricity pricing are at the heart of decentralized coordination in the electric power network.

EVIDENCE

Led by the Pacific Northwest National Laboratory (PNNL), the Olympic Peninsula GridWise Testbed Project served as a demonstration project to test a residential network with highly distributed intelligence and market-based dynamic pricing. [4] Washington’s Olympic Peninsula is an area of great scenic beauty, with population centers concentrated on the northern edge. The peninsula’s electricity distribution network is connected to the rest of the network through a single distribution substation. While the peninsula is experiencing economic growth and associated growth in electricity demand, the natural beauty of the area and other environmental concerns served as an impetus for area residents to explore options beyond simply building generation capacity on the peninsula or adding transmission capacity.

Thus, this project tested how the combination of enabling technologies and market-based dynamic pricing affected utilization of existing capacity, deferral of capital investment and the ability of distributed demand-side and supply-side resources to create system reliability. Two questions were of primary interest:

1) What dynamic pricing contracts do consumers find attractive, and how does enabling technology affect that choice?

2) To what extent will consumers choose to automate energy use decisions?

The project – which ran from April 2006 through March 2007 – included 130 broadband-enabled households with electric heating. Each household received a programmable communicating thermostat (PCT) with a visual user interface that allowed the consumer to program the thermostat for the home – specifically to respond to price signals, if desired. Households also received water heaters equipped with a GridFriendly appliance (GFA) controller chip developed at PNNL that enables the water heater to receive price signals and be programmed to respond automatically to those price signals. Consumers could control the sensitivity of the water heater through the PCT settings.
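
The control logic of such a GFA-style device can be pictured with a small sketch. The decision rule below – heat only when the price is at or below the household's trigger, unless the tank falls below a comfort floor – is a hypothetical simplification, not the actual controller logic.

```python
# Hypothetical GFA-style decision rule; thresholds are illustrative only.
def heater_on(price, trigger, tank_temp_c, comfort_floor_c=40.0):
    """Return True if the water heater should heat right now."""
    if tank_temp_c < comfort_floor_c:
        return True               # never let the tank fall below the floor
    return price <= trigger       # otherwise heat only when power is cheap

if __name__ == "__main__":
    TRIGGER = 0.15                # $/kWh, chosen by the household via the PCT
    for price, temp in [(0.08, 55.0), (0.32, 55.0), (0.32, 38.0)]:
        state = "ON" if heater_on(price, TRIGGER, temp) else "OFF"
        print(f"price ${price:.2f}, tank {temp:.0f} C -> heater {state}")
```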

These households also participated in a market field experiment involving dynamic pricing. While they continued to purchase energy from their local utility at a fixed, discounted price, they also received a cash account with a predetermined balance, which was replenished quarterly. The energy use decisions they made would determine their overall bill, which was deducted from their cash account, and they were able to keep any difference as profit. The worst a household could do was a zero balance, so they were no worse off than if they had not participated in the experiment. At any time customers could log in to a secure website to see their current balances and determine the effectiveness of their energy use strategies.

On signing up for the project, the households received extensive information and education about the technologies available to them and the kinds of energy use strategies facilitated by these technologies. They were then asked to choose a retail pricing contract from three options: a fixed price contract (with an embedded price risk premium), a TOU contract with a variable critical peak price (CPP) component that could be called in periods of tight capacity or an RTP contract that would reflect a wholesale market-clearing price in five-minute intervals. The RTP was determined using a uniform price double auction in which buyers (households and commercial customers) submit bids and sellers submit offers simultaneously. This project represented the first instance in which a double auction retail market design was tested in electric power.
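
A minimal sketch of uniform-price double auction clearing appears below. This is not PNNL's implementation; it simply sorts bids from highest to lowest and offers from lowest to highest, matches quantity while bids still exceed offers, and sets one price (here, the midpoint of the marginal matched pair) at which all trades clear.

```python
# Illustrative double-auction clearing; quantities in kW, prices in $/kWh.
def clear(bids, offers):
    """bids, offers: lists of (price, quantity). Returns (price, cleared_kw)."""
    bids = sorted(bids, key=lambda x: -x[0])     # buyers, highest price first
    offers = sorted(offers, key=lambda x: x[0])  # sellers, lowest price first
    i = j = 0
    bid_q = bids[0][1] if bids else 0
    off_q = offers[0][1] if offers else 0
    cleared, price = 0.0, None
    while i < len(bids) and j < len(offers) and bids[i][0] >= offers[j][0]:
        traded = min(bid_q, off_q)
        cleared += traded
        price = (bids[i][0] + offers[j][0]) / 2  # midpoint of marginal pair
        bid_q -= traded
        off_q -= traded
        if bid_q == 0:
            i += 1
            bid_q = bids[i][1] if i < len(bids) else 0
        if off_q == 0:
            j += 1
            off_q = offers[j][1] if j < len(offers) else 0
    return (round(price, 4) if price is not None else None), cleared

if __name__ == "__main__":
    household_bids = [(0.30, 5), (0.18, 4), (0.10, 6)]
    generator_offers = [(0.05, 4), (0.12, 5), (0.25, 8)]
    print(clear(household_bids, generator_offers))  # (0.15, 9.0)
```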

The households ranked the contracts and were then divided fairly evenly among the three types, along with a control group that received the enabling technologies and had their energy use monitored but did not participate in the dynamic pricing market experiment. All households received either their first or second choice; interestingly, more than two-thirds of the households ranked RTP as their first choice. This result counters the received wisdom that residential customers want only reliable service at low, stable prices.

According to the 2007 report on the project by D.J. Hammerstrom (and others), on average participants saved 10 percent on their electricity bills. [5] That report also includes the following findings about the project:

Result 1. For the RTP group, peak consumption decreased by 15 to 17 percent relative to what the peak would have been in the absence of the dynamic pricing – even though their overall energy consumption increased by approximately 4 percent. This flattening of the load duration curve indicates shifting some peak demand to nonpeak hours. Such shifting increases the system’s load factor, improving capacity utilization and reducing the need to invest in additional capacity, for a given level of demand. A 15 to 17 percent reduction is substantial and is similar in magnitude to the reductions seen in other dynamic pricing pilots.

After controlling for price response, weather effects and weekend days, the RTP group’s overall energy consumption was 4 percent higher than that of the fixed price group. This result, in combination with the load duration effect noted above, indicates that the overall effect of RTP dynamic pricing is to smooth consumption over time, not decrease it.

Result 2. The TOU group achieved both a large price elasticity of demand (-0.17), based on hourly data, and an overall energy reduction of approximately 20 percent relative to the fixed price group.

After controlling for price response, weather effects and weekend days, the TOU group’s overall energy consumption was 20 percent lower than that of the fixed price group. This result indicates that the TOU (with occasional critical peaks) pricing induced overall conservation – a result consistent with the results of the California SPP project. The estimated price elasticity of demand in the TOU group was -0.17, which is high relative to that observed in other projects. This elasticity suggests that the pricing coupled with the enabling end-use technology amplifies the price responsiveness of even small residential consumers.
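
For readers who want the arithmetic behind an elasticity of -0.17: at that elasticity, a 10 percent price increase reduces consumption by roughly 1.7 percent. The sketch below uses invented numbers to generate a consistent price-quantity pair and then recovers the elasticity from it.

```python
# Back-of-the-envelope elasticity check; all figures are invented.
import math

def log_log_elasticity(p0, q0, p1, q1):
    """Constant-elasticity estimate from two (price, quantity) observations."""
    return math.log(q1 / q0) / math.log(p1 / p0)

if __name__ == "__main__":
    # Hypothetical household: 2.0 kWh/h at $0.10; price rises 10% to $0.11.
    q1 = 2.0 * (1.10 ** -0.17)   # demand implied by an elasticity of -0.17
    print(f"New usage: {q1:.3f} kWh/h")  # about 1.968, a ~1.6% reduction
    print(f"Recovered elasticity: {log_log_elasticity(0.10, 2.0, 0.11, q1):.2f}")
```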

Despite these results, dynamic pricing and enabling technologies are proliferating slowly in the electricity industry. Proliferation requires a combination of formal and informal institutional change to overcome a variety of barriers. And while formal institutional change (primarily in the form of federal legislation) is reducing some of these barriers, it remains an incremental process. The traditional rate structure, fixed by state regulation and slow to change, presents a substantial barrier. Predetermined load profiles inhibit market-based pricing by ignoring individual customer variation and the information that customers can communicate through choices in response to price signals. Furthermore, the persistence of standard offer service at a discounted rate (that is, a rate that does not reflect the financial cost of insurance against price risk) stifles any incentive customers might have to pursue other pricing options.

The most significant – yet also most intangible and difficult-to-overcome – obstacle to dynamic pricing and enabling technologies is inertia. All of the primary stakeholders in the industry – utilities, regulators and customers – harbor status quo bias. Incumbent utilities face incentives to maintain the regulated status quo as much as possible (given the economic, technological and demographic changes surrounding them) – and thus far, they’ve been successful in using the political process to achieve this objective.

Customer inertia also runs deep because consumers have not had to think about their consumption of electricity or the price they pay for it – a bias consumer advocates generally reinforce by arguing that low, stable prices for highly reliable power are an entitlement. Regulators and customers value the stability and predictability that have arisen from this vertically integrated, historically supply-oriented and reliability-focused environment; however, what is unseen and unaccounted for is the opportunity cost of such predictability – the foregone value creation in innovative services, empowerment of customers to manage their own energy use and use of double-sided markets to enhance market efficiency and network reliability. Compare this unseen potential with the value creation in telecommunications, where even young adults can understand and adapt to cell phone-pricing plans and benefit from the stream of innovations in the industry.

CONCLUSION

The potential for a highly distributed, decentralized network of devices automated to respond to price signals creates new policy and research questions. Do individuals choose to automate their devices’ responses to price signals? If so, do they adjust settings, and how? Does the combination of price effects and innovation increase total surplus, including consumer surplus? In aggregate, do these distributed actions create emergent order in the form of system reliability?

Answering these questions requires thinking about the diffuse and private nature of the knowledge embedded in the network, and the extent to which such a network becomes a complex adaptive system. Technology helps determine whether decentralized coordination and emergent order are possible; the dramatic transformation of digital technology in the past few decades has decreased transaction costs and increased the extent of feasible decentralized coordination in this industry. Institutions – which structure and shape the contexts in which such processes occur – provide a means for creating this coordination. And finally, regulatory institutions affect whether or not this coordination can occur.

For this reason, effective regulation should focus not on allocation but rather on decentralized coordination and how to bring it about. This in turn means a focus on market processes, which are adaptive institutions that evolve along with technological change. Regulatory institutions should also be adaptive, and policymakers should view regulatory policy as work in progress so that the institutions can adapt to unknown and changing conditions and enable decentralized coordination.

ENDNOTES

1. Order can take many forms in a complex system like electricity – for example, keeping the lights on (short-term reliability), achieving economic efficiency, optimizing transmission congestion, longer-term resource adequacy and so on.

2. Roger W. Gale, Jean-Louis Poirier, Lynne Kiesling and David Bodde, “The Path to Perfect Power: New Technologies Advance Consumer Control,” Galvin Electricity Initiative report (2007). www.galvinpower.org/resources/galvin.php?id=88

3. The exception to this claim is the TOU contract, where the rate structure is known in advance. However, even on such a simple dynamic pricing contract, devices that allow customers to see their consumption and expenditure in real time instead of waiting for their bill can change behavior.

4. D.J. Hammerstrom et al., “Pacific Northwest GridWise Testbed Demonstration Projects, Volume I: The Olympic Peninsula Project” (2007). http://gridwise.pnl.gov/docs/op_project_final_report_pnnl17167.pdf

5. Ibid.