Alcatel-Lucent Your Smart Grid Partner

Alcatel-Lucent offers comprehensive capabilities that combine Utility industry-specific knowledge and experience with carrier-grade communications technology and expertise. Our IP/MPLS Transformation capabilities and Utility market-specific knowledge are the foundation of turnkey solutions designed to enable Smart Grid and Smart Metering initiatives. In addition, Alcatel-Lucent has specifically developed Smart Grid and Smart Metering applications and solutions that:

  • Improve the availability, reliability and resiliency of critical voice and data communications even during outages
  • Enable optimal use of network and grid devices by setting priorities for communications traffic according to business requirements
  • Meet NERC CIP compliance and cybersecurity requirements
  • Improve the physical security and access control mechanism for substations, generation facilities and other critical sites
  • Offer a flexible and scalable network to grow with the demands and bandwidth requirements of new network service applications
  • Provide secure web access for customers to view account, electricity usage and billing information
  • Improve customer service and experience by integrating billing and account information with IP-based, multi-channel client service platforms
  • Reduce carbon emissions and increase efficiency by lowering communications infrastructure power consumption by as much as 58 percent

Working with Alcatel-Lucent enables Energy and Utility companies to realize the increased reliability and greater efficiency of next-generation communications technology, providing a platform for, and minimizing the risks associated with, moving to Smart Grid solutions. And Alcatel-Lucent helps Energy and Utility companies achieve compliance with regulatory requirements and reductions in operational expenses while maintaining the security, integrity and high availability of their power infrastructure and services. We build Smart Networks to support the Smart Grid.

American Recovery and Reinvestment Act of 2009 Support from Alcatel-Lucent

The American Recovery and Reinvestment Act (ARRA) of 2009 was adopted by Congress in February 2009 and allocates $4.5 billion to the Department of Energy (DOE) for Smart Grid deployment initiatives. As a result of the ARRA, the DOE has established a process for awarding the $4.5 billion via investment grants for Smart Grid research, development and deployment projects. Alcatel-Lucent is uniquely qualified to help utilities take advantage of the ARRA Smart Grid funding. In addition to world-class technology and Smart Grid and Smart Metering solutions, Alcatel-Lucent offers turnkey assistance in the preparation of grant applications, and subsequent follow-up and advocacy with federal agencies. Partnership with Alcatel-Lucent on ARRA includes:

  • Design, implementation and support for a Smart Grid network
  • Identification of all standardized and unique elements of each grant program
  • Preparation and compilation of all required grant application components, such as project narratives, budget formation, market surveys, mapping, and all other documentation required for completion
  • Advocacy at federal, state, and local government levels to firmly establish the value proposition of a proposal and advance it through the entire process to ensure the maximum opportunity for success

Alcatel-Lucent is a Recognized Leader in the Energy and Utilities Market

Alcatel-Lucent is an active and involved leader in the Energy and Utility market, with active membership and leadership roles in key Utility industry associations, including the Utility Telecom Council (UTC), the American Public Power Association (APPA), and GridWise. GridWise is an association of Utilities, industry research organizations (e.g., EPRI, Pacific Northwest National Labs) and Utility vendors working in cooperation with the DOE to promote Smart Grid policy, regulatory issues, and technologies (see www.gridwise.org for more information). Alcatel-Lucent is also represented on the Board of Directors of UTC’s Smart Network Council, which was established in 2008 to promote and develop Smart Grid policies, guidelines, and recommended technologies and strategies for Smart Grid solution implementation.

Alcatel-Lucent IP/MPLS Solution for the Next Generation Utility Network

Utility companies are experienced at building and operating reliable and effective networks to ensure the delivery of essential information and maintain flawless service delivery. The Alcatel-Lucent IP/MPLS solution can enable the utility operator to extend and enhance its network with new technologies like IP, Ethernet and MPLS. These new technologies will enable the utility to optimize its network to reduce both CAPEX and OPEX without jeopardizing reliability. Advanced technologies also allow the introduction of new Smart Grid applications that can improve operational and workflow efficiency within the utility. Alcatel-Lucent leverages cutting edge technologies along with the company’s broad and deep experience in the utility industry to help utility operators build better, next-generation networks with IP/MPLS.

Alcatel-Lucent has years of experience in the development of IP, MPLS and Ethernet technologies. The Alcatel-Lucent IP/MPLS solution offers utility operators the flexibility, scale and feature sets required for mission-critical operation. With the broadest portfolio of products and services in the telecommunications industry, Alcatel-Lucent has the unparalleled ability to design and deliver end-to-end solutions that drive next-generation utility networks.

About Alcatel-Lucent

Alcatel-Lucent’s vision is to enrich people’s lives by transforming the way the world communicates. As a leader in utility, enterprise and carrier IP technologies, fixed, mobile and converged broadband access, applications, and services, Alcatel-Lucent offers the end-to-end solutions that enable compelling communications services for people at work, at home and on the move.

With 77,000 employees and operations in more than 130 countries, Alcatel-Lucent is a local partner with global reach. The company has the most experienced global services team in the industry, and Bell Labs, one of the largest research, technology and innovation organizations focused on communications. Alcatel-Lucent achieved adjusted revenues of €17.8 billion in 2007, and is incorporated in France, with executive offices located in Paris.

Online Transient Stability Controls

Over the last few decades, the growth of the world’s population and its corresponding demand for electrical energy have driven a huge increase in the supply of electrical power. However, for logistical, environmental, political and social reasons, this generation is rarely located near its consumers, necessitating very large and complex transmission networks. The addition of variable wind energy in remote locations only exacerbates the situation. Meanwhile, transmission grid capacity has kept pace with neither generation capacity nor consumption, and outdated operational capabilities leave the grid extremely vulnerable to potential large-scale outages.

For example, today if a fault is detected in the transmission system, the only course is to shed both load and generation. This is often done without consideration of real-time consequences or analysis of alternatives; if not done rapidly, the fault can trigger a widespread, cascading power system blackout. While it is necessary to remove factors that might lead to a large-scale blackout, countermeasures such as restricting power flow may achieve this only by sacrificing economical operation. Thus, the flexible and economical operation of an electric power system is often in conflict with the requirement for improved supply reliability and system stability.

Limits of Off-line Approaches

One approach to this problem involves stabilization systems deployed to prevent generator step-out by controlling generator acceleration through power shedding, in which some generators are shut off at the time of a power system fault.

In 1975, an off-line special protection system (SPS) for power flow monitoring was introduced to achieve transient stability of the trunk power system and power source system after a network expansion in Japan. Initially, the system’s settings were determined in advance by manual calculations, using transient stability simulation programs that assumed many contingencies on typical power flow patterns.

This type of off-line solution has the following problems:

  • Planning, design, programming, implementation and operational tasks are laborious. A vast number of simulations are required to determine the setting tables and required countermeasures, such as generator shedding, whenever transmission lines are constructed;
  • It is not well suited to variable generation sources such as wind or photovoltaic farms;
  • It is not suitable for reuse and replication, incurring high maintenance costs; and
  • Excessive travel time and related labor expense are required for engineers and field staff to maintain the units at numerous sites.

By contrast, an online TSC solution employs various sensors placed throughout the transmission network, substations and generation sources. These sensors are connected via high-speed communications to regional computer systems that monitor for transient events affecting system stability and execute contingency actions. These regional systems are in turn connected to centralized computers that monitor the network of distributed computers, building and distributing contingency plans based on historical and recent information. If a transient event occurs, the entire ecosystem responds within 150 ms to detect and analyze the event, determine the correct course of action, and execute the appropriate set of contingency actions to preserve the stability of the power network.

In recent years, high-performance computational servers have been developed, and their costs have fallen enough to deploy many of them in parallel and/or in a distributed computing architecture. The result is a system that not only greatly increases the availability and reliability of the power system but also optimizes the throughput of the grid: network efficiency has improved without significant investment in new transmission lines.

Solution and Elements

In 1995, the first online TSC system was developed and introduced in Japan. This solution provided the system stabilization required by the construction of the new 500 kV trunk networks of Chubu Electric Power Co. (CEPCO) [1-4]. Figure 1 shows the configuration of the online TSC system. The system introduced pre-processing online calculation in the TSC-P (parent) alongside fast, post-event control executed by the combination of TSC-C (child) and TSC-T (terminal). This online TSC system can be considered an example of a self-healing smart grid solution. As a result of periodic simulations using the online data in the TSC-P, operators of energy management systems/supervisory control and data acquisition (EMS/SCADA) are constantly aware of the stability margins for current power system conditions.

Using the same online data, the periodic calculations performed in the TSC-P reflect current power network conditions and determine the proper countermeasures to mitigate transient system events. The TSC-P simulates transient stability dynamics for about 100 contingencies on the 500 kV, 275 kV and 154 kV transmission networks. The setting tables of required countermeasures, such as generator shedding, are periodically sent to the TSC-Cs located at main substations. The TSC-Ts, located at generation stations, shed the generators when an actual fault occurs. Generator shedding by the combination of TSC-Cs and TSC-Ts is completed within 150 ms of the fault to maintain the system’s stability.
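
To make the division of labor concrete, the following minimal Python sketch illustrates the precompute/fast-lookup pattern that the TSC-P, TSC-C and TSC-T embody. All names and the stub solver are illustrative assumptions, not the published TSC design; the real TSC-P runs full transient stability simulations on live data.

```python
# Minimal sketch of the precompute/fast-lookup pattern described above.
# All names and the stub solver are illustrative, not the real TSC design.

def build_setting_tables(snapshot, contingencies, simulate):
    """TSC-P role: every calculation cycle, run a transient stability
    simulation for each assumed contingency on live network data and
    record which generators must be shed if that fault actually occurs."""
    return {c: simulate(snapshot, c) for c in contingencies}

def on_fault(contingency, setting_tables, trip):
    """TSC-C/TSC-T role: when a real fault matches a contingency, act
    from the precomputed table -- a lookup plus trip commands, fast
    enough to finish within the 150 ms stability window."""
    for unit in setting_tables.get(contingency, []):
        trip(unit)  # TSC-T opens the breaker at the generation station

# Toy usage with a stub in place of the stability solver.
tables = build_setting_tables(
    snapshot={"flows": {}},                       # live EMS/SCADA data
    contingencies=["500kV_line_A_3phase_fault"],
    simulate=lambda s, c: ["unit_G1"],            # stand-in for the solver
)
on_fault("500kV_line_A_3phase_fault", tables, trip=print)  # sheds unit_G1
```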

Customer Experiences and Benefits

Figure 2 shows the locations of online TSC systems and their coverage areas in CEPCO’s power network. Two online TSC systems are currently operating: the trunk power TSC system, introduced in 1995 to protect the 500 kV trunk power system, and the power source TSC system, which protects the 154 kV to 275 kV power source systems around the generation stations.

Actual performance data have shown some significant benefits:

  • Total transfer capability (TTC) is improved through elimination of transient stability limitations. TTC is set by the minimum of several limits: the thermal limit of transmission lines, transient stability, frequency stability and voltage stability. Transient stability often determines the TTC in the case of long transmission lines from generation plants. CEPCO was able to introduce high-efficiency, combined-cycle power plants without constructing new transmission lines; TTC was increased from 1,500 MW to 3,500 MW by introducing the online TSC solution (see the sketch following this list).
  • Power shedding is optimized. Not only is the power flow of the transmission line on which a fault occurs assessed, but the effects of other power flows surrounding the fault point are included in the analysis to decide the precise stability limit. The online TSC system can also reflect the constraints and priorities of each generator to be shed. To ensure smooth restoration after the fault, the restart time of shut-off generators, for instance, can also be included.
  • When constructing new transmission lines, numerous off-line studies assuming various power flow patterns are required to support an off-line SPS. After introduction of the online TSC system, constructing new transmission lines became more efficient, requiring only changes to the equipment database used for simulation in the TSC-P.
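
To make the TTC rule in the first bullet concrete: TTC is the minimum of the individual limits, so relaxing the binding transient stability limit raises TTC to the next binding limit. In the minimal Python sketch below, only the 1,500 MW and 3,500 MW figures come from the text; the other limit values are invented for illustration.

```python
# TTC is the minimum of the individual limits. Only the 1,500 MW and
# 3,500 MW figures come from the text; the other values are invented.
limits_mw = {
    "thermal": 3500,
    "transient_stability": 1500,   # the binding limit before online TSC
    "frequency_stability": 4000,
    "voltage_stability": 3800,
}
print(min(limits_mw.values()))     # 1500: transient stability binds

limits_mw["transient_stability"] = 5000  # constraint relaxed by online TSC
print(min(limits_mw.values()))     # 3500: the thermal limit now binds
```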

In 2003, this CEPCO system received the 44th Annual Edison Award from the Edison Electric Institute (EEI), recognizing CEPCO’s achievement with the world’s first application of this type of system, and the contribution of the system to efficient power management.

Today, benefits continue to accrue. A new TSC-P, adopting the latest high-performance computation servers, is under construction for operation in 2009 [3]. The new system will shorten the calculation interval from every five minutes to every 30 seconds in order to reflect power system conditions as precisely as possible. This interval was determined by analysis of the various stability situations recorded by the current TSC-P over more than 10 years of operation.

Additionally, although the current TSC-P uses the same online data as the EMS/SCADA, the system can also control emergency actions against small-signal instability by receiving phasor measurement unit (PMU) data to detect divergences of phasor angles and voltages among the main substations.

Summary

The online TSC system is expected to realize optimum stabilization control under today’s complicated power system conditions by obtaining power system information online and carrying out stability calculations at specific intervals. The online TSC will thus help utilities achieve better returns on investment in new or renovated transmission lines, reducing outage time and enabling a more efficient smart grid.

References

  1. Ota, Kitayama, Ito, Fukushima, Omata, Morita and Y. Kokai, “Development of Transient Stability Control System (TSC System) Based on Online Stability Calculation”, IEEE Trans. on Power Systems, Vol. 11, No. 3, pp. 1463-1472, August 1996.
  2. Koaizawa, Nakane, Omata and Y. Kokai, “Actual Operating Experience of Online Transient Stability Control System (TSC System)”, IEEE PES Winter Meeting, 2000, Vol. 1, pp. 84-89.
  3. Takeuchi, Niwa, Nakane and T. Miura, “Performance Evaluation of the Online Transient Stability Control System (Online TSC System)”, IEEE PES General Meeting, June 2006.
  4. Takeuchi, Sato, Nishiiri, Kajihara, Kokai and M. Yatsu, “Development of New Technologies and Functions for the Online TSC System”, IEEE PES General Meeting, June 2006.

Successful Smart Grid Architecture

The smart grid is progressing well on several fronts. Groups such as the GridWise Alliance, events such as GridWeek, and national policy actions such as the American Recovery and Reinvestment Act in the U.S. have all brought more positive attention to this opportunity. The boom in distributed renewable energy and its demands for a bidirectional grid are driving the need forward, as are sentiments for improving consumer control and awareness, giving customers the ability to engage in real-time energy conservation.

On the technology front, advances in wireless and other data communications make wide-area sensor networks more feasible. Distributed computation is certainly more powerful – just consider your iPod! Even architectural issues such as interoperability are now being addressed in their own forums, such as Grid-Interop. It seems that the recipe for a smart grid is coming together in a way that would make many who envisioned it proud. But to avoid making a gooey mess in the oven, an overall architecture that carefully considers seven key ingredients for success must first exist.

Sources of Data

Utilities have eons of operational data: both real time and archival, both static (such as nodal diagrams within distribution management systems) and dynamic (such as switching orders). There is a wealth of information generated by field crews and from root-cause analyses of past system failures. Advanced metering infrastructure (AMI) implementations become a fine-grained distribution sensor network feeding communication aggregation systems such as Silver Spring Networks’ UtilityIQ or Trilliant’s SecureMesh network.

These data sources need to be architected to be available to enhance, support and provide context for real-time data coming in from new intelligent electronic devices (IEDs) and other smart grid devices. In an era of renewable energy sources, grid connection controllers become yet another data source. With renewables, micro-scale weather forecasting such as IBM Research’s Deep Thunder can provide valuable context for grid operation.

Data Models

Once data is obtained, in order to preserve its value in a standard format, one can think in terms of an extensible markup language (XML)-oriented database. Modern implementations of these databases have improved performance characteristics, and the International Electrotechnical Commission (IEC) common information model/generic interface definition (CIM/GID), though oriented more to assets than operations, is a front-running candidate for consideration.

Newer entries, such as the Device Language Message Specification – Companion Specification for Energy Metering (DLMS-COSEM) for AMI, are also coming into practice. Sometimes more important than the technical implementation of the data, however, is the model that is employed. A well-designed data model not only makes exchange of data and legacy program adjustments easier, but also helps the applicability of security and performance requirements. The existence of data models is often a good indicator of an intact governance process, for it facilitates use of the data by multiple applications.
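
As a minimal illustration of holding asset data in an XML-oriented form, the short Python sketch below serializes one asset record. The element names are simplified stand-ins chosen for this example, not the actual IEC CIM/GID schema.

```python
# Minimal sketch of an XML-oriented asset record. Element names are
# simplified stand-ins, not the actual IEC CIM/GID schema.
import xml.etree.ElementTree as ET

asset = ET.Element("PowerTransformer", attrib={"mRID": "XFMR-0042"})
ET.SubElement(asset, "name").text = "Walden Substation T1"
ET.SubElement(asset, "ratedMVA").text = "50"
ET.SubElement(asset, "location").text = "42.4604,-71.3489"

# One shared record like this can be consumed by billing, outage
# analysis and asset management alike.
print(ET.tostring(asset, encoding="unicode"))
```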

Communications

Customer workshops and blueprinting sessions have shown that one of the most common issues needing to be addressed is the design of the wide-area communication system. Data communications architecture affects data rate performance, the cost of distributed intelligence and the identification of security susceptibilities.

There is no single communications technology that is suitable for all utilities, or even for all operational areas across any individual utility. Rural areas may be served by broadband over powerline (BPL), while urban areas benefit from multi-protocol label switching (MPLS) and purpose-designed mesh networks, enhanced by their proximity to fiber.

In the future, there could be entirely new choices in communications. So, the smart grid architect needs to focus on security, standardized interfaces to accept new technology, enablement of remote configuration of devices to minimize any touching of smart grid devices once installed, and future-proofing the protocols.

The architecture should also be traceable to the business case. This needs to include probable use cases that may not be in the PUC filing, such as AMI now but broader smart grid later. Few utilities will be pleased with the idea of a communication network rebuild within five years of deploying an AMI-only network.

Communications architecture must also consider power outages, so battery backup, solar recharging, or other equipment may be required. Even arcane details such as “Will the antenna on a wireless device be the first thing to blow off in a hurricane?” need to be considered.

Security

Certainly, the smart grid’s purpose is to enhance network reliability, not lower its security. But with the advent of the North American Electric Reliability Corp. Critical Infrastructure Protection requirements (NERC-CIP), security has risen to become a prime consideration, usually addressed in phase one of the smart grid architecture.

Unlike the data center, field-deployed security has many new situations and challenges. There is security at the substation – for example, who can access what networks, and when, within the control center. At the other end, security of the meter data in a proprietary AMI system needs to be addressed so that only authorized applications and personnel can access the data.

Service-oriented architecture (SOA) appliances are network devices that enable integration and help provide security at the Web-services message level. These typically include an integration device, which streamlines SOA infrastructures; an XML accelerator, which offloads XML processing; and an XML security gateway, which helps provide message-level Web-services security. A security gateway helps ensure that only authorized applications are allowed to access the data, whether from an IP meter or an IED. SOA appliance security features complement the SOA security management capabilities of software.
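
The toy Python sketch below illustrates the kind of message-level authorization check a security gateway performs; the policy table and message fields are invented for illustration and do not represent any vendor’s appliance.

```python
# Toy sketch of message-level authorization at a security gateway.
# The policy table and message fields are invented for illustration.

ALLOWED = {
    ("meter_data_service", "billing_app"),
    ("meter_data_service", "outage_analytics"),
}

def gateway_filter(message):
    """Admit a Web-services message only if the (service, caller) pair
    is authorized; everything else is rejected before it reaches the
    data, whether the source is an IP meter or an IED."""
    key = (message["service"], message["caller"])
    if key not in ALLOWED:
        raise PermissionError(f"unauthorized access: {key}")
    return message

gateway_filter({"service": "meter_data_service",
                "caller": "billing_app",
                "body": "<ReadRequest meter='12345'/>"})
```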

Proper architectures could address dynamic, trusted virtual security domains and be combined not only with intrusion protection systems but also with anomaly detection systems. If hackers can introduce viruses via data (such as malformed video images that exploit faults in media players), then similar concerns should be under discussion for smart grid data. Is messing with 300 megawatts (MW) of demand response much different from cyber attacking a 300 MW generator?

Analytics

A smart grid cynic might say, “Who is going to look at all of this new data?” That is where analytics supports the processing, interpretation and correlation of the flood of new grid observations. One part of the analytics would be performed by existing applications. This is where data models and integration play a key role. Another part of the analytics dimension is with new applications and the ability of engineers to use a workbench to create their customized analytics dashboard in a self-service model.

Many utilities have power system engineers in a back office using spreadsheets; part of the smart grid concept is that all data is available to the community to use modern tools to analyze and predict grid operation. Analytics may need a dedicated data bus, separate from an enterprise service bus (ESB) or enterprise SOA bus, to meet the timeliness and quality of service to support operational analytics.

A two-tier or three-tier (if one considers the substations) bus is an architectural approach to segregate data by speed while maintaining the interconnections that support a holistic view of the operation. Connections to standard industry tools such as ABB’s NEPLAN® or Siemens Power Technologies International PSS®E, or to general tools such as MATLAB, should be considered at design time, rather than as an additional expense committed after smart grid commissioning.
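
As a rough illustration of segregating data by speed, the Python sketch below routes events onto a fast operational bus or a slower enterprise bus by timeliness requirement; the names and the one-second threshold are assumptions, not a prescribed design.

```python
# Illustrative routing of events onto a fast operational tier or the
# slower enterprise tier. The one-second threshold is an assumption.
from queue import Queue

operational_bus = Queue()   # dedicated bus for operational analytics
enterprise_bus = Queue()    # enterprise service bus (ESB/SOA) tier

def publish(event, deadline_s):
    """Keep sub-second operational data off the enterprise bus so
    operational analytics can meet its quality-of-service targets."""
    bus = operational_bus if deadline_s < 1.0 else enterprise_bus
    bus.put(event)

publish({"sensor": "feeder_17", "amps": 312}, deadline_s=0.1)  # fast tier
publish({"report": "monthly_losses"}, deadline_s=3600)         # slow tier
```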

Integration

Once data is sensed, securely communicated, modeled and analyzed, the results need to be applied for business optimization. This means new smart grid data gets integrated with existing applications, and metadata locked in legacy systems is made available to provide meaningful context.

This is typically accomplished by enabling systems as services per the classic SOA model. However, issues of common data formats, data integrity and name services must be considered. Data integrity includes verification and cross-correlation of information for validity, and designation of authoritative sources and specific personnel who own the data.

Name services addresses the common issue of an asset – whether transformer or truck – having multiple names in multiple systems. An example might be a substation that has a location name, such as Walden; a geographic information system (GIS) identifier such as latitude and longitude; a map name such as nearest cross streets; a capital asset number in the financial system; a logical name in the distribution system topology; an abbreviated logical name to fit in the distribution management system graphical user interface (DMS GUI); and an IP address for the main network router in the substation.

Different applications may know new data by association with one of those names, and that name may need translation to be used in a query against another application. While rewriting the applications to a common model may seem appealing, it may well send a CIO into shock. The smart grid should help propagate intelligence throughout the utility; this doesn’t necessarily mean replacing everything, but it should “information-enable” everything.
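
A minimal Python sketch of the name-service idea, using the Walden substation example from above (all identifiers are invented):

```python
# Minimal name service for the multi-name asset problem described
# above. All identifiers are invented for illustration.

NAMES = {
    "walden-substation": {
        "location":  "Walden",
        "gis":       "42.4604,-71.3489",
        "map":       "Main St & Concord Rd",
        "asset_no":  "CAP-008812",
        "dms_gui":   "WLDN",
        "router_ip": "10.20.30.1",
    }
}

def translate(name, from_system, to_system, names=NAMES):
    """Resolve the name an asset carries in one system to the name the
    target application expects, so a query can cross systems."""
    for aliases in names.values():
        if aliases.get(from_system) == name:
            return aliases[to_system]
    raise KeyError(f"{name!r} not known to system {from_system!r}")

# e.g., the DMS GUI abbreviation -> the financial system's asset number
print(translate("WLDN", "dms_gui", "asset_no"))  # CAP-008812
```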

Interoperability is essential at both a service level and at the application level. Some vendors focus more at the service, but consider, for example, making a cell phone call from the U.S. to France – your voice data may well be code division multiple access (CDMA) in the U.S., travel by microwave and fiber along its path, and emerge in France in a global system for mobile (GSM) environment, yet your speech, the “application level data,” is retained transparently (though technology does not yet address accents!).

Hardware

The world of computerized solutions is not about software alone. For instance, AMI storage consolidation addresses the concern that the volume of data coming into the utility will increase exponentially. As more meter data can be read on demand, data analytics will be employed to properly understand it all, requiring a sound hardware architecture to manage, back up and feed the data into the analytics engines. In particular, storage is needed in the head-end systems and the meter-data management systems (MDMS).

Head-end systems pull data from the meters to provide management functionality, while the MDMS collects data from head-end systems and validates it. Then the data can be used by billing and other business applications. Data in both the head-end systems and the master copy of the MDMS is replicated into multiple copies for full backup and disaster recovery. For MDMS, the master database that stores all the aggregated data is replicated for other business applications, such as a customer portal or data analytics, so that the master copy of the data is not tampered with.

Since the smart grid essentially performs in real time, and the electricity business is non-stop, hardware and software solutions must be fail-safe with automated redundancy. The AMI data especially needs to be reliable. The key factors then become: operating system stability; hardware memory access speed and range; server and power supply reliability; file system redundancy, such as a journaled file system (JFS); and techniques such as FlashCopy to provide a point-in-time copy of a logical drive.

FlashCopy can be useful in speeding up database hot backups and restores. VolumeCopy can extend the replication functionality by providing the ability to copy the contents of one volume to another. Enhanced remote mirroring (Global Mirror, Global Copy and Metro Mirror) can mirror data from one storage system to another over extended distances.

Conclusion

Those are seven key ingredients for designing or evaluating a recipe for success with regard to implementing the smart grid at your utility. Addressing these dimensions will help achieve a solid foundation for a comprehensive smart grid computing system architecture.

Thinking Smart

For more than 30 years, Newton-Evans Research Company has been studying the development, from embryonic to emergent stages, of what the world now collectively terms the smart, or intelligent, grid. In so doing, our team has examined the technology behind the smart grid, the adoption and utilization rates of this technology bundle and the related market segments for more than a dozen major components of today’s – and tomorrow’s – intelligent grid.

This white paper contains information on eight of these key components of the smart grid: control systems, smart grid applications, substation automation programs, substation IEDs and devices, advanced metering infrastructure (AMI) and automated meter-reading devices (AMR), protection and control, distribution network automation and telecommunications infrastructure.

Keep in mind that there is a lot more to the smart grid equation than simply installing advanced metering devices and systems. A large AMI program may not even be the correct starting point for hundreds of the world’s utilities. Perhaps it should be a near-term upgrade to control center operations or to electronic device integration of the key substations, an initial effort to deploy feeder automation, or even a complete protection and control (P&C) migration to digital relaying technology.

There simply is not a straightforward roadmap to show utilities how to develop a smart grid that is truly in that utility’s unique best interests. Rather, each utility must endeavor to take a step back and evaluate, analyze and plan for its smart grid future based on its (and its various stakeholders’) mission, its role, its financial and human resource limitations and its current investment in modern grid infrastructure and automation systems and equipment.

There are multiple aspects of smart grid development, some of which involve administrative as well as operational components of an electric power utility, and include IT involvement as well as operations and engineering; administrative management of customer information systems (CIS) and geographic information systems (GIS) as well as control center and dispatching operation of distribution and outage management systems (DMS and OMS); substation automation as well as true field automation; third-party services as well as in-house commitment; and of course, smart metering at all levels.

Space Station

I have often compared the evolution of the smart grid to the iterative process of building the international space station: a long-term strategy, a flexible planning environment, responsive changes incorporated into the plan as technology develops and matures, properly phased. What is really needed is a skilled smart grid architect to oversee the increasingly complex duties of an effective systems planning organization within the utility.

All of these soon-to-be-interrelated activities need to be viewed in light of the value they add to operational effectiveness and operating efficiencies, as well as the effect of their involvement with one another. If the utility has not yet done so, it must strive to adopt a systems-wide approach to problem solving for any one grid-related investment strategy. As 40 years of accumulated utility operational insight in the digital age has shown, decisions made for one aspect of control and automation will have an impact on other components.

No utility can today afford to play whack-a-mole with its approach to the intelligent grid and related investments, isolating and solving one problem while inadvertently creating another larger or more costly problem elsewhere because of limited visibility and “quick fix” decision making.

As these smart grid building blocks are put into service, as they become integrated and are made accessible remotely, the overall smart grid necessarily becomes more complex, more communications-centric and more reliant on sensor-based field developments.

In some sense, it reminds one of building the space station. It takes time. The process is iterative. One component follows another, with planning on a system-wide basis. There are no quick solutions. Everything must be very systematically approached from the outset.

Buckets of Spending

We often tackle questions about the buckets of spending for smart grid implementations. This is the trigger for the supply side of the smart grid equation. Suppliers are capable of developing, and will make the required R&D investment in, any aspect of transmission and distribution network product development – if favorable market conditions exist or if market outlooks can be supported with field research. Hundreds of major electric power utilities from around the world have already contributed substantially to our ongoing studies of smart grid components.

In looking at the operational/engineering components of smart grid developments, centering on the physical grid itself (whether a transmission grid, a distribution grid or both), one must include what today comprises P&C, feeder and switch automation, control center-based systems, substation measurement and automation systems, and other significant distribution automation activities.

On the IT and administrative side of smart grid development, one has to include the upgrades that will definitely be required in the near or mid term, including CIS, GIS, OMS and the wide-area communications infrastructure required as the foundation for automatic metering. Based on our internal estimates and those of others, spending for grid automation in 2008 is pegged at or slightly above $1 billion nationwide and will approach $3.5 billion globally. When (if) we add in annual spending for CIS, GIS, meter data management and communications infrastructure developments, several additional billions of dollars become part of the overall smart grid pie.

In a new question included in the 2008 Newton-Evans survey of control center managers, these officials were asked to check the two most important components of near-term (2008-2010) work on the intelligent grid. A total of 136 North American utilities and nearly 100 international utilities responded, each indicating its two most important efforts during the planning horizon.

On a summary basis, AMI led, mentioned by 48 percent of the group. EMS/SCADA investments in upgrades, new applications, interfaces and the like came next, mentioned by 42 percent. Distribution automation was cited by 35 percent.

Spending Outlook

The financial environment and economic outlook do not bode well for many segments of the national and global economies. One question we have been asked continually well into this year is whether the electric power industry will suffer the fate of other industries and significantly scale back planned spending on T&D automation because of possible revenue erosion amid the slowdown and fallout from this year’s difficult industrial and commercial environment.

Let’s first take a summary look at each of the five major components of T&D automation because these all are part and parcel of the operations/engineering view of the smart grid of the future.

Control Systems Outlook: Driven by SCADA-like systems and including energy management systems and distribution management software, this segment of the market is hovering around the $500 million mark on a global scale – excluding the value of turnkey control center projects (engineering, procurement and construction (EPC) of new control center facilities and communications infrastructure). We see neither growth nor erosion in this market for the near term, with some uptick in spending for new applications software and better visualization tools to compensate for the “aging” of installed systems. While not a control center-based system, outage management is a closely aligned technology and will continue to take hold in the global market. Sales of OMS software and platforms are already approaching the $100 million mark, led by the likes of Oracle Utilities, Intergraph and MilSoft.

Substation Automation and Integration Programs: The market for substation IEDs, new communications implementations and integration efforts has grown to nearly $500 million. Multiyear programs aimed at upgrading, integrating and automating the existing global base of roughly a quarter million transmission and primary distribution substations have been underway for some time. Some programs launched in 2008 will continue into 2011. We see continued growth in spending for critical substation A&I programs, albeit 2009 will likely see the slowest rate of growth in several years (less than 3 percent) if the current economic malaise persists through the year. Continuing emphasis will be on HV transmission substations as the first priority for upgrades and the addition of more intelligent electronic devices.

AMI/AMR: This is the linchpin of the smart grid in the eyes of many industry observers, utility officials and, perhaps most importantly, regulators at the state and federal levels in the U.S., Canada, Australia and throughout Western Europe. With nearly 1.5 billion electricity meters installed around the world, about 93 percent of them electro-mechanical, interest in smart metering can also be found in dozens of other countries, including Indonesia, Russia, Honduras, Malaysia and Thailand. Another form of smart meter, the prepayment meter, is taking hold in some of the developing nations of the world. The combined resources of Itron, coupled with its Actaris acquisition, make this U.S. firm the global share leader in sales and installations of AMI and AMR systems and meters.

Protection and Control: The global market for protective relays, the foundation of P&C, has climbed well above $1.5 billion. Will 2009 see a drop in spending for protective relays? Not likely, as these devices continue to expand in capabilities and take on additional functions (sequence-of-event recording, fault recording and analysis, and even acting as a remote terminal unit). To the surprise of many, a substantial amount (perhaps as much as $125 million) is still spent annually on electro-mechanical relays nearly 20 years into the digital relay era. The North American leader in protective relay sales to utilities is SEL, while GE Multilin continues to hold a leading share in industrial markets.

Distribution Automation: Today, when we discuss distribution automation, the topic can encompass any and all aspects of a distribution network automation scheme, from the control center-based SCADA and distribution management system on out to the substation, where RTUs, PLCs, power meters, digital relays, bay controllers and a myriad of communicating devices now help operate, monitor and control power flow and measurement in the medium voltage ranges.

Nonetheless, it is beyond the substation fence, reaching further down into the primary and secondary network, that we find reclosers, capacitors, pole-top RTUs, automated overhead switches, automated feeders and associated smart controls. These are the new smart devices that comprise the basic building blocks of distribution automation. The objective is the ability to detect and isolate faults at the feeder level and enable ever-faster service restoration. With spending approaching $1 billion worldwide, DA implementations will continue to expand over the coming decade, nearing $2.6 billion in annual spending by 2018.

Summary

The T&D automation market and the smart grid market will not go away this year, nor will they shrink. When telecommunications infrastructure developments are included, about $5 billion will have been spent in 2008 on global T&D automation programs. When AMI programs are added into the mix, the total exceeds $7 billion. T&D automation spending growth will likely be subdued, perhaps into 2010. However, the overall market for T&D automation is likely to be propped up at or near current spending levels for 2009 and into 2010, benefiting from the continued regulatory-driven momentum for AMI/AMR, renewable portfolio standards and demand response initiatives. By 2011, we should once again see healthier capital expenditure budgets, prompting overall T&D automation spending to reach about $6 billion annually. Over the 2008-2018 period, we anticipate more than $75 billion in cumulative smart grid expenditures.

Expenditure Outlook

Newton-Evans staff has examined the current outlook for smart grid-related expenditures and has made a serious attempt to avoid double counting potential revenues from all of the components of information systems spending and the emerging smart grid sector of utility investment.

While the enterprise-wide IT portions (the blue and red segments of Figure 1) include all major components of IT (hardware, software, services and staffing), the “pure” smart grid components tend to be primarily hardware, in our view. Significant overlap with both administrative and operational IT supporting infrastructure is a vital component of all smart grid programs underway at this time.

Between “traditional IT” and the evolving smart grid components, nearly $25 billion will likely be spent this year by the world’s electric utilities. Nearly one-third of all 2009 information technology investments will be “smart grid” related.

By 2013, the total value of the various pie segments is expected to increase substantially, with “smart grid” spending possibly exceeding $12 billion. While this amount is generally understood to be conservative, and somewhat lower than smart grid spending totals forecasted by other firms, we will stand by our forecasts, based on 31 years of research history with electric power industry automation and IT topics.

Some industry sources may include the total value of T&D capital spending in their smart grid outlook.

But that portion of the market is already approaching $100 billion globally, and will likely top $120 billion by 2013. Much of that market would go on whether or not a smart grid is involved. Clearly, all new procurements of infrastructure equipment will be made with an eye to including as much smart content as is available from the manufacturers and integrators.

What we are limiting our definition to is edge investment: the components of the 21st-century digital transport and delivery systems being added onto or incorporated into the building blocks (power transformers, lines, switchgear, etc.) of electric power transmission and delivery.

Meeting Future Utility Operating Challenges With a Smart Grid

The classical school of utility operations prescribes four priorities, ranked in the following descending order: safety, reliability, customer service and profit. Although it’s not hard to engage any number of industry insiders in an argument over whether profit in the classical model has recently switched places with customer service (and/or whether it should), most people accept that safety and reliability still reign supreme when it comes to operating a utility. This is true whether one takes a policy-, economic-, utility- or customer-oriented perspective.

Over many decades the utility industry has established a remarkably consistent pattern of power delivery based on the above-described priorities. Large, centralized generation facilities produce electricity from various sources, interconnected via a networked transmission system feeding a predominantly radial distribution system. This classical power distribution system supports a predictable demand pattern that utilities can typically manage by using analytics such as similar-day load forecasting. Moreover, future demand is also predictable, since average loads have been growing consistently by just a few percentage points annually, year in and year out.
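
Similar-day load forecasting, in its simplest form, averages the hourly load curves of historical days that resemble the target day. The Python sketch below shows the idea; the matching criteria and the data are illustrative only.

```python
# Toy similar-day load forecast: average the hourly load curves of past
# days matching the target day's type and temperature. Criteria and
# data are illustrative only.

def similar_day_forecast(history, day_type, temp_f, temp_tol=5):
    matches = [d["load_mw"] for d in history
               if d["day_type"] == day_type
               and abs(d["temp_f"] - temp_f) <= temp_tol]
    return [sum(h) / len(h) for h in zip(*matches)]  # hour-by-hour mean

history = [
    {"day_type": "weekday", "temp_f": 88, "load_mw": [620, 710, 890]},
    {"day_type": "weekday", "temp_f": 91, "load_mw": [640, 730, 910]},
    {"day_type": "weekend", "temp_f": 90, "load_mw": [480, 520, 600]},
]
print(similar_day_forecast(history, "weekday", 90))  # [630.0, 720.0, 900.0]
```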

To support this power delivery model, utilities also employ remarkably consistent system design and operational processes. Although any given utility might employ slightly different processes and procedures at varying degrees of efficiency and effectiveness – or deploy operating assets with slightly different design specifications – the underlying elements are generally consistent from one utility to another. They are engineered to either fail safe (safety) and/or not to fail at all (reliability) based on long-term operating patterns.

So why implement a smart grid? After all, the classical method of managing supply and demand has worked reasonably well over the decades. The system is safe and reliable, and most utilities are very profitable even in economic downtimes. However, a smart grid has three interrelated attributes – transparency, conditionality and kinematics – that together radically improve the “situational awareness” of the real-time state of the grid for both utilities and customers.

With this situational awareness comes the high system-state observability (transparency) that drives conditional management (conditionality) of the grid. All of this will ultimately support future power delivery patterns, which will be much more complex and difficult to predict and manage because demand and supply will fluctuate much more radically than at present (kinematics).

TRANSPARENCY

Price transparency is the foundation on which deregulated and competitive markets are built. However, until now price transparency has been limited primarily to wholesale transmission and generation domains. Indeed, the lack of price transparency at the point of distribution (that is, at retail) is a key reason deregulation has stalled in the United States.

Price transparency is, of course, only one aspect of the issue. Utilities must also synchronize usage transparency with price transparency in time. That is, the value of knowing real-time prices is diminished if customers cannot also see their real-time usage and adjust their energy usage behavior in response to the real-time price signals.

From the utility’s perspective, usage transparency is limited. That’s because the distribution elements of most utility operations are largely opaque to operators. Beyond the substation, service disruptions are identified primarily by inference from fault conditions, and the associated usage patterns are recorded via meter readings as much as a month after the disruption occurred. For example, a distribution circuit may be substantially overloaded, but in most cases the utility won’t know until it fails. And when a failure does occur, utilities still depend on manual processes to determine the precise location and cause of the fault. The customer loads or network conditions that precipitated the failure can only be analyzed well after the event.

A smart grid significantly improves the level of visibility into the distribution grid. Smart meters, line sensors and the embedded processing that takes place within system assets such as switches and reclosers all provide a stream of real-time and near real-time data to the utility about the current operational state of the grid. The result: a dramatic improvement in utilities’ awareness of the state of the distribution grid.

CONDITIONALITY

As is the case with transparency, the consumer’s perspective of conditionality is more mature than the utility’s perspective. For example, the idea of the smart building is all about implementing a mini premise-side smart grid within the customer location and installing simple devices such as motion detectors that turn lights on or off in a room. Commercial energy management systems use even more sophisticated ways of optimizing the lighting, heating and other environmental parameters of a work or living space.

From the utility’s perspective, however, conditionality is much less advanced. In today’s operating world, most maintenance or repair activities take place either too late or too soon. When utilities wait until something in the infrastructure fails, it’s too late. If the grid is inspected based on some set time schedule irrespective of its condition, it’s too soon. Utilities thus fall into a pattern of either fault- or usage-based maintenance.

The alternative – condition-based maintenance – is already being used in many industries. The difference in the utilities industry is that outside of energy generation and transmission activities, there’s little data on the ongoing real-time condition of most of the assets a utility utilizes to provide its customers with service.

The chief benefit of conditionality is that it allows utilities to optimize asset utilization in both over- and under-use situations (Figure 1).

Conditionality also opens up opportunities for utilities to fully automate their distribution operations. Not only will this enable them to provide more reliable service to customers, it will also reduce the need for human intervention and thus dramatically cut labor costs. In addition, automation can help mitigate the utility industry’s looming problem of an aging workforce. For these and other reasons, conditionality is one of the most important contributions the smart grid will make to the industry.

KINEMATICS

In classical physics, kinematics studies how the position of an object changes with time. In today’s utility operations, neither load nor supply is particularly kinematic: under normal operating conditions, changes to either occur slowly, and both can be reliably predicted.

Many industry observers, however, believe that this scenario is about to change dramatically. One thing that’s expected to drive this change is “distributed generation.” Under this scenario, instead of relying on large centralized generation, the industry will see significant growth in distribution-side generation technologies. Unlike today, much of this supply will not be centrally dispatched or under direct central control. The resulting energy supply will be much more complex to predict and manage. To the futurist this may seem like an exciting prospect, but to a grid operator or a utility, this represents a control and management nightmare, because it directly challenges the operational priorities of safety and reliability.

Hybrid and electric automobiles will also substantially alter the pattern of supply and load on the current grid. According to some predictions, electric automobiles will account for upwards of 20 percent of the automobile fleet in the United States in the coming decades. This means that millions of automobiles charging each night could, over time, increase customer load profiles by upwards of 30 to 50 percent. When coupled with even more futuristic ideas such as “vehicle to grid,” you end up with energy consumption scenarios that no one imagined when the grid was built.
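
A back-of-the-envelope check of that claim appears below; every number in the sketch is an assumption chosen for illustration, not data from the text.

```python
# Back-of-the-envelope EV charging check. All inputs are assumptions.
homes = 1_000_000
avg_home_load_kw = 1.2   # assumed average overnight household load
ev_share = 0.20          # 20% of homes charging an EV overnight
charger_kw = 3.3         # assumed Level 2 charger draw

base_mw = homes * avg_home_load_kw / 1000
ev_mw = homes * ev_share * charger_kw / 1000
print(f"base {base_mw:.0f} MW, EVs add {ev_mw:.0f} MW "
      f"(+{100 * ev_mw / base_mw:.0f}%)")
# -> base 1200 MW, EVs add 660 MW (+55%): the same order of magnitude
#    as the 30 to 50 percent load-profile increase cited above.
```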

CONCLUSION

The three attributes of the smart grid – transparency, conditionality and kinematics – are interrelated. Transparency provides situational awareness, which enables conditionality. And conditionality likewise is a requirement for managing the kinematic supply and load patterns of the future. But more importantly, the smart grid is the only way the classical operating priorities of the system can be sustained – or enhanced – given the upcoming expected changes to the industry.

Making Change Work: Why Utilities Need Change Management

“Making Change Work,” an IBM study done in collaboration with the Center for Evaluation and Methods at Bonn University, analyzed the factors for successful implementation of change. The scope of this study, released in 2007, is now being expanded because the project management and change management professions, formerly aligned, are now at a turning point of differentiation. The reason is simple: too many projects fail to consider both components as critical to success – and therefore lack insight into the day-to-day impact of a change on members of the organization.

Despite this, many organizations have been reluctant to implement change management programs, plans and teams. And when they have put such programs in place, the programs tend to be launched too late in the project process, are inadequately funded or are perceived as part-time tasks that can be assigned to members of the project management team.

WHAT IS CHANGE MANAGEMENT?

Change management is a structured approach to business transformation that manages the transition from a current state to a desired future state. Far from being static or rigid, change management is an ever-evolving program that varies with the needs of the organization. Effective change management involves people and provides open communication.

Change management is as important as project management. However, whereas project management is a tactical activity, change management represents a strategic initiative. To understand the difference, consider the following:

  • Change management is the process of driving corporate strategy by identifying, addressing and managing barriers to change across the organization or enterprise.
  • Project management is the process of implementing the tools needed to enable or mobilize the corporate strategy.

Change management is an ongoing process that works in close concert with project management. At any given time at least one phase of change management should be occurring. More likely, multiple phases will be taking place across various initiatives.

A change management program can be tailored to manage the needs of the organizational culture and relationships. The program must close the gaps among workforce, project team and sponsor leadership during all phases of all projects. It does this by:

  • Ensuring proper alignment of the organization with new technology and process requirements;
  • Preparing people for new processes and technology through training and communication;
  • Identifying and addressing human resource implications such as job definitions, union negotiations and performance measures;
  • Managing the reaction of both individuals and the entire organization to change; and
  • Providing the right level of support for ongoing implementation success.

The three fundamental activities of a change management program are leading, communicating and engaging. These three activities should span the project life cycle to maintain both awareness of the change and its momentum (Figure 1).

KEY ELEMENTS OF A CHANGE PROGRAM

There are three best-practice elements that make the difference between successful projects and less successful ones: [1]

Organizational awareness for the challenges inherent in any change. This involves the following:

  • Getting a real understanding of – and leadership buy-in to – the stakeholders and culture;
  • Recognizing the interdependence of strategy and execution;
  • Ensuring an integrated strategy approach linking business strategy, operations, organization design and change and technology strategy; and
  • Educating leadership on change requirements and commitment.

Consistent use of formal methods for change management. This should include:

  • Covering the complete life cycle – from definition to deployment to post-implementation optimization;
  • Allowing for easy customization and flexibility through a modular design;
  • Incorporating change management and value realization components into each phase to increase the likelihood of success; and
  • Providing a published plan with ongoing accountability and sponsorship as well as continuous improvement.

A specified share of the project budget that is invested in change management. This should involve:

  • Investing in change at a level linked to project success. Projects that invest more than 10 percent of the project budget in change management achieve an average success rate of 45 percent (Figure 2). [2]
  • Assigning the right resources to support change management early on and maintaining the required support. This also limits the adverse impacts of change on an organization’s productivity (Figure 3). [3]

WHY DO UTILITIES NEED CHANGE MANAGEMENT?

Utilities today face a unique set of challenges. For starters, they’re simultaneously dealing with aging infrastructures and aging workforces. In addition, there are market pressures to improve performance, become more “green” and mitigate rising energy costs. To address these realities, many utilities are seeking merger and acquisition (M&A) opportunities as well as implementing new technologies.

The cost cutting of the past decade combined with M&As has left utilities with gaps in workforce experience as well as budget challenges. Yet utilities are facing major business disruptions going into the next decade and beyond. To cope with these disruptions, companies are implementing new technologies such as the intelligent grid, advanced metering infrastructure (AMI), meter data management (MDM), enterprise asset management (EAM) and work management systems (WMSs). It’s not uncommon for utilities to be implementing multiple new systems simultaneously that affect the day-to-day activities of people throughout the organization, from frontline workers to senior managers.

A change management program can address a number of challenges specific to the utilities industry.

CULTURAL CLIMATE: ‘BUT WE’RE DIFFERENT’

A utility is a utility is a utility. But a deeper look into individual businesses reveals nuances in their relationships with both internal and external stakeholders that are unique to each company. A change management team must intimately understand these relationships. For example, externally, how is the utility perceived by regulators, customers, the community and even analysts? As for internal relationships, how do various operating divisions relate and work together? Some operating divisions work well together on project teams and respect each other and their differences; others do not.

There may be cultural differences, but work is work. Only change management can address these relationships. Knowing the utility’s cultural climate and relationships will help shape each phase of the change management program, and allow change management professionals to customize a project or system implementation to fit a company’s culture.

REGULATORY LANDSCAPE

With M&As and increasing market pressures across the United States, the regulatory landscape confronting utilities is becoming more variable. We’ve seen several types of regulatory-related challenges.

Regulatory pressure. Whether regulators mandate or simply encourage new technology implementations can make a significant difference in how stakeholders in a project behave. In general, there’s more resistance to a new technology when it’s required versus voluntarily implemented. Change management can help work through participant behaviors and mitigate obstacles so that project work can continue as planned.

Multiple regulatory jurisdictions. Many utilities with recently expanded footprints following M&As now have to manage requests from and expectations of multiple regulatory commissions. Often these commissions have different mandates. Change management initiatives are needed to work through the complexity of expectations, manage multiple regulatory relationships and drive utilities toward a unified corporate strategy.

Regulatory evolution. Just as markets evolve, so do regulatory influences and mandates. Often regulators will issue orders that can be interpreted in many ways. They may even do this to get information in the form of reactions from their various constituents. Whatever the reason, the reality is that utilities are managing an ever-changing portfolio of regulations. Change management can better prepare utilities for this constant change.

OPERATIONS MATURITY

When new systems and technologies being implemented encompass multiple operating divisions, it can be difficult for stakeholders to agree on operating standards or processes. Project team members representing the various operating regions can resist compromise for fear of losing control. This often occurs when utilities are attempting to integrate systems across operating regions following an acquisition.

Change management helps ensure that various constituents – for example, the regional operating divisions – are prepared for imminent business transformation. In large organizations, this preparation period can take a year or more. But for organizations to realize the benefits of new systems and technology implementations, they must be ready to receive the benefits. Readiness and preparedness are largely the responsibilities of the change management team.

ORGANIZATIONAL COHESIVENESS

The notion of organizational cohesiveness is that across the organization all constituents are equally committed to the business transformation initiative and have the same understanding of the overarching corporate strategy while also performing their individual roles and responsibilities.

Senior executives must align their visions and share a common commitment to change. After all, they set the tone for change through their respective organizations. If they are not in sync with each other, their organizations become silos, and business processes are less likely to be fluid across organizational boundaries. Frontline managers and associates must, in turn, be engaged and enthusiastic about the transformations to come.

Organizational cohesiveness is especially critical during large systems implementations involving utility field operations. Leaders at multiple locations must be ready to communicate and support change – and this support must be visible to the workforce. Utilities must understand this requirement at the beginning of a project to make change manageable, realistic and personal enough to sustain momentum. All too often, we’ve heard team members comment, “We had a lot of leadership at the project kickoff, but we really haven’t seen leadership at any of our activities or work locations since then. The project team tells us what to do.”

Moreover, leadership – when removed from the project – usually will not admit that they’re in the dark about what’s going on. Yet their lack of involvement will not escape the attention of frontline employees. Once the supervisor is perceived as lacking information – and therefore power – it’s all over. Improving customer service and quality, cutting costs, adopting new technology and merging operations all require changing employees. [4]

For utilities, the concept of organizational cohesiveness is especially important because just as much technology “lives” outside IT as inside. Yet the engineers who use this non-IT-controlled technology – what Gartner calls “operations technology” – are usually disconnected from the IT world in terms of both practical planning and execution. However, these worlds must act as one for a company to be truly agile. [5]

Change management methods and tools ensure that organizational cohesiveness exists through project implementation and beyond.

UNION ENGAGEMENT

Successful change occurs through a sustained partnership with union representatives throughout the project life cycle. Project leadership and union leadership must work together and partner to implement change. Union representation should be on the project team. Representatives can be involved in process reviews, testing and training, or asked to serve as change champions. In addition, communication is critical throughout all phases of a project. Frontline employees must see real evidence of how this change will benefit them. Change is personal: everyone wants to know how his or her job will be impacted.

There should also be union representation in training activities, since workers tend to be more receptive to peer-to-peer support. Utilities should, for example, engage union change champions to help co-workers during training and to be site “go to” representatives. Utilities should also provide advance training and recognize all who participate in it.

Union representatives should also participate in design and/or testing, since they will be able to pinpoint issues that will impact routine daily tasks. It could be something as simple as changing screen labels per their recommendation to increase user understanding.

More than one union workforce may be involved in a project. Location cultures that exist in large service territories or that have resulted from mergers may try to isolate themselves from the project team and resist change. Utilities should assemble a team from various work groups and then do the following to address the history and differences in the workforce:

  • Request ongoing union participation throughout the life of the project.
  • Include union roles as part of the project charter and define these roles with union leadership.
  • Provide a kickoff overview to union leadership.
  • Include union representation in work process development with balanced representation from various areas. Union employees know the job and can quickly identify the pros and cons of work tasks. A structured facilitation process and issue resolution process is required.
  • Assign a corporate human resource or labor relations role to review processes that impact the union workforce.
  • Develop communication campaigns that address union concerns, such as conducting face-to-face presentations at employing locations and educating union leaders prior to each change rollout.
  • Involve union representatives in training and user support.

Change management is necessary to sort through the relationships of multiple union workforces so that projects and systems can be implemented.

AN AGING WORKFORCE

A successful change management program will help mitigate the aging workforce challenges utilities will be facing for many years to come. [6]

WHAT TO EXPECT FROM A SUCCESSFUL CHANGE MANAGEMENT PROGRAM

The result of a successful change management program is a flexible organization that’s responsive to customer needs, regulatory mandates and market pressures, and readily embraces new technologies and systems. A change-ready organization anticipates, expects and is increasingly comfortable with change and exhibits the following characteristics:

  • The organization is aligned.
  • The leaders are committed.
  • Business processes are developed and defined across all operational units.
  • Associates at all levels have received communications and have continued access to resources.

Facing major business transformations and unique industry challenges, utilities cannot afford to forgo change management programs. This skill set is just as critical as any other in your organization. Change is a cost. Change should be part of the project budget.

Change is an ongoing, long-term investment. Good change management designed specifically for your culture and challenges minimizes change’s adverse effect on daily productivity and helps you reach and sustain project goals.

ENDNOTES

  1. “Making Change Work” (an IBM study), Center of Evaluation and Methods, Bonn University, 2007; excerpts from “IBM Integrated Strategy and Change Methodology,” 2007.
  2. “Making Change Work,” Center of Evaluation and Methods, Bonn University, 2007.
  3. Ibid.
  4. T.J. Larkin and Sandar Larkin, “Communicating Change: Winning Employee Support for New Business Goals,” McGraw-Hill, 1994, p. 31.
  5. K. Steenstrup, B. Williams, Z. Sumic, C. Moore; “Gartner’s Energy and Utilities Summit: Agility on Both Sides of the Divide”; Gartner Industry Research ID Number G00145388; Jan. 30, 2007; p. 2.
  6. P. R. Bruffy and J. Juliano, “Addressing the Aging Utility Workforce Challenge: ACT NOW,” Montgomery Research 2006 journal.

The Distributed Utility of the (Near) Future

The next 10 to 15 years will see major changes – what future historians might even call upheavals – in the way electricity is distributed to businesses and households throughout the United States. The exact nature of these changes and their long-term effect on the security and economic well-being of this country are difficult to predict. However, a consensus already exists among those working within the industry – as well as with politicians and regulators, economists, environmentalists and (increasingly) the general public – that these fundamental changes are inevitable.

This need for change is in evidence everywhere across the country. The February 26, 2008, temporary blackout in Florida served as just another warning that the existing paradigm is failing. Although at the time of this writing, the exact cause of that blackout had not yet been identified, the incident serves as a reminder that the nationwide interconnected transmission and distribution grid is no longer stable. To wit: disturbances in Florida on that Tuesday were noted and measured as far away as New York.

A FAILING MODEL

The existing paradigm of nationwide grid interconnection brought about primarily by the deregulation movement of the late 1990s emphasizes that electricity be generated at large plants in various parts of the country and then distributed nationwide. There are two reasons this paradigm is failing. First, the transmission and distribution system wasn’t designed to serve as a nationwide grid; it is aged and only marginally stable. Second, political, regulatory and social forces are making the construction of large generating plants increasingly difficult, expensive and eventually unfeasible.

The previous historic paradigm made each utility primarily responsible for generation, transmission and distribution in its own service territory; this had the benefit of localizing disturbances and fragmenting responsibility and expense. With loose interconnections to other states and regions, a disturbance in one area or a lack of resources in a different one had considerably less effect on other parts of the country, or even other parts of service territories.

For better or worse, we now have a nationwide interconnected grid – albeit one that was neither designed for the purpose nor serves it adequately. Although the existing grid can be improved, the expense would be massive, and probably cost prohibitive. Knowledgeable industry insiders, in fact, calculate that it would cost more than the current market value of all U.S. utilities combined to modernize the nationwide grid and replace its large generating facilities over the next 30 years. Obviously, the paradigm is going to have to change.

While the need for dramatic change is clear, though, what’s less clear is the direction that change should take. And time is running short: North American Electric Reliability Corp. (NERC) projects serious shortages in the nation’s electric supply by 2016. Utilities recognize the need; they just aren’t sure which way to jump first.

With a number of tipping points already reached (and the changes they describe continuing to accelerate), it’s easy to envision the scenario that’s about to unfold. Consider the following:

  • The United States stands to face a serious supply/demand disconnect within 10 years. Unless something dramatic happens, there simply won’t be nearly enough electricity to go around. Already, some parts of the country are feeling the pinch. And regulatory and legislative uncertainty (especially around global warming and environmental issues) makes it difficult for utilities to know what to do. Building new generation of any type other than “green energy” is extremely difficult, and green energy – which currently meets less than 3 percent of U.S. supply needs – cannot close the growing gap between supply and demand being projected by NERC. Specifically, green energy will not be able to replace the 50 percent of U.S. electricity currently supplied by coal within that 10-year time frame.
  • Fuel prices continue to escalate, and the reliability of the fuel supply continues to decline. In addition, increasing restrictions are being placed on fuel selection, especially coal.
  • A generation of utility workers is nearing retirement, and finding adequate replacements among the younger generation is proving increasingly difficult.
  • It’s extremely difficult to site new transmission – needed to deal with supply-and-demand issues. Even new Federal Energy Regulatory Commission (FERC) authority to authorize corridors is being met with virulent opposition.

SMART GRID NO SILVER BULLET

Distributed generation – including many smaller supply sources to replace fewer large ones – and “smart grids” (designed to enhance delivery efficiency and effectiveness) have been posited as solutions. However, although such solutions offer potential, they’re far from being in place today. At best, smart grids and smarter consumers are only part of the answer. They will help reduce demand (though probably not enough to make up the generation shortfall), and they’re both still evolving as concepts. While most utility executives recognize the problems, they continue to be uncertain about the solutions and have a considerable distance to go before implementing any of them, according to recent Sierra Energy Group surveys.

According to these surveys, more than 90 percent of utility executives now feel that the intelligent utility enterprise and smart grid (IUE/SG) – that is, the distributed utility – represents an inevitable part of their future (Figure 1). This finding was true of all utility types supplying electricity.

Although utility executives understand the problem and the IUE/SG approach to solving part of it, they’re behind in planning on exactly how to implement the various pieces. That “planning lag” for the vision can be seen in Figure 2.

At least some fault for the planning lag can be attributed to forces outside the utilities. While politicians and regulators have been emphasizing conservation and demand response, they’ve failed to produce guidelines for how this will work. And although a number of states have established mandatory green power percentages, Congress failed to do the same in the federal energy legislation adopted in December 2007 (the Energy Independence and Security Act). While the EPACT of 2005 “urged” regulators to “urge” utilities to install smart meters, it didn’t make their installation a requirement, and thus regulators have moved at different speeds in different parts of the country on this urging.

Although we’ve entered a new era, utilities remain burdened with the internal problems caused by the “silo mentality” left over from generations of tight regulatory control. Today, real-time data is often still jealously guarded in engineering and operations silos. However, a key component in the development of intelligent utilities will be pushing both real-time and back-office data onto dashboards so that executives can make real-time decisions.

Getting from where utilities were (and in many respects still are) in the last century to where they need to be by 2018 isn’t a problem that can be solved overnight. And, in fact, utilities have historically evolved slowly. Today’s executives know that technological evolution in the utility industry needs to accelerate rapidly, but they’re uncertain where to start. For example, should you install an advanced metering infrastructure (AMI) as rapidly as possible? Do you emphasize automating the grid and adding artificial intelligence? Do you continue to build out mobile systems to push data (and more detailed, simpler instructions) to field crews who soon will be much younger and less experienced? Do you rush into home automation? Do you build windmills and solar farms? Utilities have neither the financial nor human resources to do everything at once.

THE DEMAND FOR AMI

Its name implies that a smart grid will become increasingly self-operating and self-healing – and indeed much of the technology for this type of intelligent network grid has been developed. It has not, however, been widely deployed. Utilities, in fact, have been working on basic distribution automation (DA) – the capability to operate the grid remotely – for a number of years.

As mentioned earlier, most theorists – not to mention politicians and regulators – feel that utilities will have to enable AMI and demand response/home automation if they’re to encourage energy conservation in an impending era of short supplies. While automated meter reading (AMR) has been around for a long time, its penetration remains relatively small in the utilities industry – especially in the case of advanced AMI meters for enabling demand response: According to figures released by Sierra Energy Group and Newton-Evans Research Co., only 8 to 10 percent of this country’s utilities were using AMI meters by 2008.

That said, the push for AMI on the part of both EPACT 2005 and regulators is having an obvious effect. Numerous utilities (including companies like Entergy and Southern Co.) that previously refused to consider AMR now have AMI projects in progress. However, even though an anticipated building boom in AMI is finally underway, there’s still much to be done to enable the demand response that will be desperately needed by 2016.

THE AUTOMATED HOME

The final area we can expect the IUE/SG concept to envelop comes at the residential level. With residential home automation in place, utilities will be able to control usage directly – by adjusting thermostats or compressor cycling, or via other techniques. Again, the technology for this has existed for some time; however, there are very few installations nationwide. A number of experiments were conducted with home automation in the early to mid-1990s, with some subdivisions even being built under the mantra of “demand-side management.”

Demand response – the term currently in vogue with politicians – may be considered more politically correct, but the net result is the same. Home automation will enable regulators, through utilities, to ration usage. Although politicians avoid using the word rationing, if global warming concerns continue to seriously impact utilities’ ability to access adequate generation, rationing will be the result – making direct load control at the residential level one of the most problematic issues in the distributed utility paradigm of the future. Are large numbers of Americans going to acquiesce calmly to their electrical supply being rationed? No one knows, but there seem to be few options.

GREEN PRESSURE AND THE TIPPING POINT

While much legitimate scientific debate remains about whether global warming is real and, if so, whether it’s a naturally occurring or man-made phenomenon (arising primarily from carbon dioxide emissions), that debate is diminishing among politicians at every level. The majority of politicians, in fact, have bought into the notion that carbon emissions from many sources – primarily the generation of electricity by burning coal – are the culprit.

Thus, despite continued scientific debate, the political tipping point has been reached, and U.S. politicians are making moves to force this country’s utility industry to adapt to a situation that may or may not be real. Whether or not it makes logical or economic sense, utilities are under increasing pressure to adopt the Intelligent Utility/Smart Grid/Home Automation/Demand Response model – a model that includes many small generation points to make up for fewer large plants. This political tipping point is also shutting down more proposed generation projects each month, adding to the likely shortage. Since 2000, approximately 50 percent of all proposed new coal-fired generation plants have been canceled, according to energy-industry adviser Wood Mackenzie (Gas and Power Service Insight, February 2008).

In the distant future, as technology continues to advance, electric generation in the United States will likely include a mix of energy sources, many of them distributed and green. However, there’s no way that in the next 10 years – the window of greatest concern in the NERC projections on the generation and reliability side – green energy will be ready and available in sufficient quantities to forestall a significant electricity shortfall. Nuclear energy represents the only truly viable solution; however, ongoing opposition to this form of power generation makes it unlikely that sufficient nuclear energy will be available within this period. The already-lengthy licensing process (though streamlined somewhat of late by the Nuclear Regulatory Commission) is exacerbated by lawsuits and opposition every step of the way. In addition, most of the necessary engineering and manufacturing processes have been lost in the United States over the last 30 years – the time elapsed since the last U.S. nuclear plant was built – making it necessary to reacquire that knowledge from abroad.

The NERC Reliability Report of Oct. 15, 2007, points strongly toward a significant shortfall of electricity within approximately 10 years – a situation that could lead to rolling blackouts and brownouts in parts of the country that have never experienced them before. It could also lead to mandatory “demand response” – in other words, rationing – at the residential level. This situation, however, is not inevitable: technology exists to prevent it (including nuclear and cleaner coal now as well as a gradual development of solar, biomass, sequestration and so on over time, with wind for peaking). But thanks to concern over global warming and other issues raised by the environmental community, many politicians and regulators have become convinced otherwise. And thus, they won’t consider a different tack to solving the problem until there’s a public outcry – and that’s not likely to occur for another 10 years, at which point the national economy and utilities may already have suffered tremendous (possibly irreparable) harm.

WHAT CAN BE DONE?

The problem the utilities industry faces today is neither economic nor technological – it’s ideological. The global warming alarmists are shutting down coal before sufficient economically viable replacements (with the possible exception of nuclear) are in place. And the rest of the options are tied up in court. (For example, the United States would need some 45 liquefied natural gas plants to support a broad conversion to gas – a costly fuel with iffy reliability – but only five have been built; the rest are tied up in court.) As long as it’s possible to tie up nuclear applications for five to 10 years and shut down “clean coal” plants through the political process, the U.S. utility industry is left with few options.

So what are utilities to do? They must get much smarter (IUE/SG), and they must prepare for rationing (AMI/demand response). As seen in SEG studies, utilities still have a ways to go in these areas, but at least this is a strategy that can (for the most part) be put in place within 10 to 15 years. The technology for IUE/SG already exists; it’s relatively inexpensive (compared with large-scale green energy development and nuclear plant construction); and utilities can employ it with relatively little regulatory oversight. In fact, regulators are actually encouraging it.

For these reasons, IUE/SG represents a major bridge to a more stable future. Even if today’s apocalyptic scenarios fail to develop – that is, global warming is debunked, or new generation sources develop much more rapidly than expected – intelligent utilities with smart grids will remain a good idea. The paradigm is shifting as we watch – but will that shift be completed in time to prevent major economic and social dislocation? Fasten your seatbelts: the next 10 to 15 years should be very interesting!

Weathering the Perfect Storm

A “perfect storm” of daunting proportions is bearing down on utility companies: assets are aging; the workforce is aging; and legacy information technology (IT) systems are becoming an impediment to efficiency improvements. This article suggests a three-pronged strategy to meet the challenges posed by this triple threat. By implementing best practices in the areas of business process management (BPM), system consolidation and IT service management (ITSM), utilities can operate more efficiently and profitably while addressing their aging infrastructure and staff.

BUSINESS PROCESS MANAGEMENT

In a recent speech before the Utilities Technology Conference, the CIO of one of North America’s largest integrated gas and electric utilities commented that “information technology is a key to future growth and will provide us with a sustainable competitive advantage.” The quest by utilities to improve shareholder and customer satisfaction has led many CIOs to reach this same conclusion: nearly all of their efforts to reduce the costs of managing assets depend on information management.

Echoing this observation, a survey of utility CIOs showed that the top business issue in the industry was the need to improve business process management (BPM).[1] It’s easy to see why.

BPM enables utilities to capture, propagate and evolve asset management best practices while maintaining alignment between work processes and business goals. For most companies, the standardized business processes associated with BPM drive work and asset management activities and bring a host of competitive advantages, including improvements in risk management, revenue generation and customer satisfaction. Standardized business processes also allow management to more successfully implement business transformation in an environment that may include workers acquired in a merger, workers nearing retirement and new workers of any age.

BPM also helps enforce a desirable culture change by creating an adaptive enterprise where agility, flexibility and top-to-bottom alignment of work processes with business goals drive the utility’s operations. These work processes need to be flexible so management can quickly respond to the next bump in the competitive landscape. Using standard work processes drives desired behavior across the organization while promoting the capture of asset-related knowledge held by many long-term employees.

Utility executives also depend on technology-based BPM to improve processes for managing assets. This allows them to reduce staffing levels without affecting worker safety, system reliability or customer satisfaction. These processes, when standardized and enforced, result in common work practices throughout the organization, regardless of region or business unit. BPM can thus yield an integrated set of applications that can be deployed in a pragmatic manner to improve work processes, meet regulatory requirements and reduce total cost of ownership (TCO) of assets.

BPM Capabilities

Although the terms business process management and work flow are often used synonymously – and are indeed related – they refer to distinctly different things. BPM is a strategic activity undertaken by an organization looking to standardize and optimize business processes, whereas work flow refers to IT solutions that automate processes – for example, solutions that support the execution phase of BPM.

There are a number of core BPM capabilities that, although individually important, are even more powerful than the sum of their parts when leveraged together. Combined, they provide a powerful solution to standardize, execute, enforce, test and continuously improve asset management business processes. These capabilities, illustrated in the sketch that follows the list, include:

  • Support for local process variations within a common process model;
  • Visual design tools;
  • Revision management of process definitions;
  • Web services interaction with other solutions;
  • XML-based process and escalation definitions;
  • Event-driven user interface interactions;
  • Component-based definition of processes and subprocesses; and
  • Single engine supporting push-based (work flow) and polling-based (escalation) processes.
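
To make the last of these capabilities concrete, the following minimal sketch shows one way a single engine might drive both push-based work-flow steps and polling-based escalations. It is illustrative only – every name is invented, and no particular BPM product’s API is implied:

    # Minimal sketch of a single engine supporting push-based (work flow)
    # and polling-based (escalation) processes. All names are hypothetical.
    import time

    class ProcessEngine:
        def __init__(self):
            self.queue = []          # push-based: steps routed to assignees
            self.escalations = []    # polling-based: (condition, action) rules

        def push(self, step, assignee):
            """Route a work-flow step directly to a worker's queue."""
            self.queue.append((step, assignee))

        def register_escalation(self, condition, action):
            """Register a rule the engine evaluates on each polling cycle."""
            self.escalations.append((condition, action))

        def poll(self, work_orders):
            """Apply every escalation rule to the current work orders."""
            for condition, action in self.escalations:
                for wo in work_orders:
                    if condition(wo):
                        action(wo)

    engine = ProcessEngine()
    engine.push("Review job plan 4711", "crew_lead")
    engine.register_escalation(
        condition=lambda wo: time.time() - wo["created"] > 72 * 3600,
        action=lambda wo: engine.push("Escalate " + wo["id"], "supervisor"),
    )
    engine.poll([{"id": "WO-17", "created": time.time() - 80 * 3600}])

In a real product, the process and escalation definitions would live in revision-managed XML rather than code, which is what makes visual design tools and local process variations possible.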

Since BPM supports knowledge capture from experienced employees, what is the relationship between BPM and knowledge management? Research has shown that the best way to capture the knowledge that resides in workers’ heads is to transfer it into the systems those workers already use. Work and asset management systems hold job plans, operational steps, procedures, images, drawings and other documents. These systems are also the best place to put information required to perform a task that an experienced worker “just knows” how to do.

By creating appropriate work flows in support of BPM, workers can be guided through a “debriefing” stage, where they can review existing job plans and procedures, and look for tasks not sufficiently defined to be performed without the tacit knowledge learned through experience. Then, the procedure can be flagged for additional input by a knowledgeable craftsperson. This same approach can even help ensure the success of the “debriefing” application itself, since BPM tools by definition allow guidance to be built in by creating online help or by enhancing screen text to explain the next step.

SYSTEM CONSOLIDATION

System consolidation needs to involve more than simply combining applications. For utilities, system consolidation efforts ought to focus on making systems agile enough to support near real-time visibility into critical asset data. This agility yields transparency across lines of business on the one hand and satisfies regulators and customers on the other. To achieve this level of transparency, utilities must enforce a modern enterprise architecture that supports both service-oriented architectures (SOAs) and BPM.

Done right, system consolidation allows utilities to create a framework supporting three key business areas:

  • Optimization of both human and physical assets;
  • Standardization of processes, data and accountability; and
  • Flexibility to change and adapt to what’s next.

The Need for Consolidation

Many utility transmission and distribution (T&D) divisions exhibit this need for consolidation. Over time, the business operations of many of these divisions have introduced different systems to support a perceived immediate need – without considering similar systems that may already be implemented within the utility. Eventually, the business finds it owns three different “stacks” of systems managing assets, work assignments and mobile workers – one for short-cycle service work, one for construction and still another for maintenance and inspection work.

With these systems in place, it’s nearly impossible to implement productivity programs – such as cross-training field crews in both construction and service work – or to take advantage of a “common work queue” that would allow workers to fill open time slots without returning to their regional service center. In addition, owning and operating these “siloed” systems adds significant IT costs, as each one has annual maintenance fees, integration costs, yearly application upgrades and retraining requirements.

In such cases, using one system for all work and asset management would eliminate multiple applications and deliver bottom-line operational benefits: more productive workers, more reliable assets and technology cost savings. One large Midwestern utility adopting the system consolidation approach was able to standardize on six core applications: work and asset management, financials, document management, geographic information systems (GIS), scheduling and mobile workforce management. The asset management system alone was able to consolidate more than 60 legacy applications. In addition to the obvious cost savings, these consolidated asset management systems are better able to address operational risk, worker health and safety and regulatory compliance – both operational and financial – making utilities more competitive.

A related benefit of system consolidation concerns the elimination of rogue “pop-up” applications. These are niche applications, often spreadsheets or standalone databases, that “pop up” throughout an organization on engineers’ desktops. Many of these applications perform critical roles in regulatory compliance yet are unlikely to pass muster at any Sarbanes-Oxley review. Typically, these pop-up applications are built to fill a “functionality gap” in existing legacy systems. Using an asset management system with a standards-based platform allows utilities to roll these pop-up applications directly into their standard supported work and asset management system.

Employees must interact with many systems in a typical day. How productive is the maintenance electrician who uses one system for work management, one for ordering parts and yet another for reporting his or her time at the end of a shift? Think of the time wasted navigating three distinct systems with different user interfaces, and the duplication of data that unavoidably occurs. How much more efficient would it be if the electrician were able to use one system that supported all of his or her work requirements? A logical grouping of systems clearly enables all workers to leverage information technology to be more efficient and effective.

Today, using modern, standards-based technologies like SOAs, utilities can eliminate the counterproductive mix of disparate commercial and “home-grown” systems. Automated processes can be delivered as Web services, allowing asset and service management to be included in the enterprise application portfolio, joining the ranks of human resource (HR), finance and other business-critical applications.
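
To suggest what such a consolidated service might look like, here is a hedged sketch of the electrician’s three systems collapsed behind one facade that could then be exposed as a Web service in an SOA. Every interface below is invented for the example, not drawn from any vendor’s actual API:

    # Hypothetical facade unifying work management, parts ordering and
    # time reporting, so one record serves all three activities.
    class WorkAndAssetService:
        def __init__(self):
            self.work_orders, self.part_orders, self.timesheets = [], [], []

        def open_work_order(self, asset_id, description):
            wo_id = len(self.work_orders) + 1
            self.work_orders.append({"id": wo_id, "asset": asset_id,
                                     "description": description})
            return wo_id

        def order_part(self, wo_id, part_number, qty):
            # Parts are ordered against the same work-order record,
            # so nothing is re-keyed into a second system.
            self.part_orders.append({"wo": wo_id, "part": part_number,
                                     "qty": qty})

        def report_time(self, wo_id, employee, hours):
            self.timesheets.append({"wo": wo_id, "employee": employee,
                                    "hours": hours})

    svc = WorkAndAssetService()
    wo = svc.open_work_order("XFMR-221", "Replace cooling fan")
    svc.order_part(wo, part_number="FAN-40", qty=1)
    svc.report_time(wo, employee="electrician_07", hours=3.5)

Wrapped in a Web services layer, operations like open_work_order become callable from any enterprise application in the portfolio.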

But although system consolidation in general is a good thing, there is a “tipping point” where consolidating simply for the sake of consolidation no longer provides a meaningful return and can actually erode savings and productivity gains. A system consolidation strategy should center on core competencies. For example, accountants or doctors are both skilled service professionals. But their similarity on that high level doesn’t mean you would trade one for the other just to “consolidate” the bills you receive and the checks you have to write. You don’t want accountants reading your X-rays. The same is true for your systems’ needs. Your organization’s accounting or human resource software does not possess the unique capabilities to help you manage your mission-critical transmission and distribution, facilities, vehicle fleet or IT assets. Hence it is unwise to consolidate these mission-critical systems.

System consolidation strategically aligned with business requirements offers huge opportunities for improving productivity and eliminating IT costs. It also improves an organization’s agility and reverses the historical drift toward stovepipe or niche systems by providing appropriate systems for critical roles and stakeholders within the organization.

IT SERVICE MANAGEMENT

IT Service Management (ITSM) is critical to helping utilities deal with aging assets, infrastructure and employees primarily because ITSM enables companies to surf the accelerating trend of asset management convergence instead of falling behind more nimble competitors. Used in combination with pragmatic BPM and system consolidation strategies, ITSM can help utilities exploit the opportunities that this trend presents.

Three key factors are driving the convergence of management processes across IT assets (PCs, servers and the like) and operational assets (the systems and equipment through which utilities deliver service). The first concerns corporate governance, whereby corporate-wide standards and policies are forcing operational units to rethink their use of “siloed” technologies and are paving the way for new, more integrated investments. Second, utilities are realizing that to deal with their aging assets, workforce and systems dilemmas, they must increase their investments in advanced information and engineering technologies. Finally, the functional boundaries between the IT and operational assets themselves are blurring beyond recognition as more and more equipment utilizes on-board computational systems and is linked over the network via IP addresses.

Utilities need to understand this growing interdependency among assets, including the way individual assets affect service to the business and the requirement to provide visibility into asset status in order to properly address questions relating to risk management and compliance.

Corporate Governance Fuels a Cultural Shift

The convergence of IT and operational technology is changing the relationship between the formerly separate operational and IT groups. The operational units are increasingly relying on IT to help deal with their “aging trilogy” problem, as well as to meet escalating regulatory compliance demands and customers’ reliability expectations. In the past, operating units purchased advanced technology (such as advanced metering or substation automation systems) on an as-needed basis, unfettered by corporate IT policies and standards. In the process, they created multiple silos of nonstandard, non-integrated systems. But now, as their dependence on IT grows, corporate governance policies are forcing operating units to work within IT’s framework. Utilities can’t afford the liability and maintenance costs of nonstandard, disparate systems scattered across their operational and IT efforts. This growing dependence on IT has thus created a new cultural challenge.

A study by Gartner of the interactions among IT and operational technology highlights this challenge. It found that “to improve agility and achieve the next level of efficiencies, utilities must embrace technologies that will enable enterprise application access to real-time information for dynamic optimization of business processes. On the other hand, lines of business (LOBs) will increasingly rely on IT organizations because IT is pervasively embedded in operational and energy technologies, and because standard IT platforms, application architectures and communication protocols are getting wider acceptance by OT [operational technology] vendors.”[2]

In fact, an InformationWeek article (“Changes at C-Level,” August 1, 2006) warned that this cultural shift could result in operational conflict if not dealt with. In that article, Nathan Bennett and Stephen Miles wrote, “Companies that look to the IT department to bring a competitive edge and drive revenue growth may find themselves facing an unexpected roadblock: their CIO and COO are butting heads.” As IT assumes more responsibility for running a utility’s operations, the roles of CIO and COO will increasingly converge.

What Is an IT Asset, Anyhow?

An important reason for this shift is the changing nature of the assets themselves, as mentioned previously. Consider the question “What is an IT asset?” In the past, most people would say that this referred to things like PCs, servers, networks and software. But what about a smart meter? It has firmware that needs updates; it resides on a wired or wireless network; and it has an IP address. In an intelligent utility network (IUN), this is true of substation automation equipment and other field-located equipment. The same is true for plant-based monitoring and control equipment. So today, if a smart device fails, do you send a mechanic or an IT technician?
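
One way to picture the blurring boundary is a single asset record that carries both operational and IT attributes. The sketch below is purely illustrative; the fields are assumptions, not a standard:

    # Illustrative record spanning operational and IT attributes,
    # reflecting the blurred IT/OT boundary described above.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ConvergedAsset:
        asset_id: str
        category: str                    # "smart meter", "RTU", "server", ...
        location: str
        ip_address: Optional[str]        # None for purely mechanical assets
        firmware_version: Optional[str]  # None if there is nothing to patch
        on_network: bool

    meter = ConvergedAsset(
        asset_id="MTR-102938", category="smart meter",
        location="feeder 12, pole 44", ip_address="10.4.7.21",
        firmware_version="2.3.1", on_network=True)

    # The question posed above, in code: mechanic or IT technician?
    dispatch = "IT technician" if meter.on_network else "mechanic"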

This question underscores why IT asset and service management will play an increasingly important role in a utility’s operations. Utilities will certainly be using more complex technology to operate and maintain assets in the future. Electronic monitoring of asset health and performance based on conditions such as meter or sensor readings and state changes can dramatically improve asset reliability. Remote monitoring agents – from third-party condition monitoring vendors or original equipment manufacturers (OEMs) of highly specialized assets – can help analyze the increasingly complex assets being installed today as well as optimize preventive maintenance and resource planning.
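
The condition-monitoring idea can be sketched in a few lines; the threshold, field names and priority scheme below are invented for illustration:

    # Hypothetical condition-based monitoring rule: request a work order
    # when sensor readings breach a limit.
    def evaluate_readings(asset_id, readings, temp_limit_c=85.0):
        """Return a work-order request if any reading exceeds the limit."""
        breaches = [r for r in readings if r["temperature_c"] > temp_limit_c]
        if not breaches:
            return None
        worst = max(breaches, key=lambda r: r["temperature_c"])
        return {"asset": asset_id, "priority": "high",
                "reason": "overtemperature %.1f C" % worst["temperature_c"]}

    request = evaluate_readings(
        "XFMR-221",
        readings=[{"temperature_c": 71.2}, {"temperature_c": 96.4}])
    if request:
        print("Create preventive work order:", request)

In practice such rules would come from condition-monitoring vendors or asset OEMs and be fed by live meter and sensor data rather than hard-coded values.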

Moreover, utilities will increasingly rely on advanced technology to help them overcome the challenges of their aging assets, workers and systems. For example, as noted above, advanced information technology will be needed to capture the tacit knowledge of experienced workers as well as replace some manual functions with automated systems. Inevitably, operational units will become technology-driven organizations, heavily dependent on the automated systems and processes associated with IT asset and service management.

The good news for utilities is that a playbook of sorts is available that can help them chart the ITSM waters in the future. The de facto global standard for best practices process guidance in ITSM is the IT Infrastructure Library (ITIL), which IT organizations can adopt to support their utility’s business goals. ITIL-based processes can help utilities better manage IT changes, assets, staff and service levels. ITIL extends beyond simple management of asset and service desk activities, creating a more proactive organization that can reduce asset failures, improve customer satisfaction and cut costs. Key components of ITIL best practices include configuration, problem, incident, change and service-level management activities.
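
As a rough illustration of how these ITIL process areas interlock – the record structures below are assumptions made for the sketch, not anything ITIL itself mandates – incidents can be tied to configuration items, with recurrence promoting a configuration item into problem management:

    # Sketch: incidents reference a configuration item (CI); repeated
    # incidents against the same CI open a problem record.
    from collections import Counter

    incidents = [
        {"id": 1, "ci": "MTR-102938", "summary": "meter not reporting"},
        {"id": 2, "ci": "MTR-102938", "summary": "meter not reporting"},
        {"id": 3, "ci": "RTR-17", "summary": "link flap"},
    ]

    def open_problems(incidents, threshold=2):
        """Promote CIs with recurring incidents to problem management."""
        counts = Counter(i["ci"] for i in incidents)
        return [{"ci": ci, "open_incidents": n}
                for ci, n in counts.items() if n >= threshold]

    print(open_problems(incidents))  # [{'ci': 'MTR-102938', 'open_incidents': 2}]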

Implemented together, ITSM best practices as embodied in ITIL can help utilities:

  • Better align asset health and performance with the needs of the business;
  • Improve risk and compliance management;
  • Improve operational excellence;
  • Reduce the cost of infrastructure support services;
  • Capture tacit knowledge from an aging workforce;
  • Utilize business process management concepts; and
  • More effectively leverage their intelligent assets.

CONCLUSION

The “perfect storm” brought about by aging assets, an aging workforce and legacy IT systems is challenging utilities in ways many have never experienced. The current, fragmented approach to managing assets and services has been a “good enough” solution for most utilities until now. But good enough isn’t good enough anymore, since this fragmentation often has led to siloed systems and organizational “blind spots” that compromise business operations and could lead to regulatory compliance risks.

The convergence of IT and operational technology (with its attendant convergence of asset management processes) represents a challenging cultural change; however, it’s a change that can ultimately confer benefits for utilities. These benefits include not only improvements to the bottom line but also improvements in the agility of the operation and its ability to control risks and meet compliance requirements associated with asset and service management activity.

To help weather the coming perfect storm, utilities can implement best practices in three key areas:

  • BPM technology can help utilities capture and propagate asset management best practices to mitigate the looming “brain drain” and improve operational processes.
  • Judicious system consolidation can improve operational efficiency and eliminate legacy systems that are burdening the business.
  • ITSM best practices as exemplified by ITIL can streamline the convergence of IT and operational assets while supporting a positive cultural shift to help operational business units integrate with IT activities and standards.

Best-practices management of all critical assets based on these guidelines will help utilities facilitate the visibility, control and standardization required to continuously improve today’s power generation and delivery environment.

ENDNOTES

  1. Gartner’s 2006 CIO Agenda survey.
  2. Bradley Williams, Zarko Sumic, James Spiers, Kristian Steenstrup, “IT and OT Interaction: Why Conflict Resolution Is Important,” Gartner Industry Research, Sept. 15, 2006.

Advanced Metering Infrastructure: The Case for Transformation

Although the most basic operational benefits of an advanced metering infrastructure (AMI) initiative can be achieved by simply implementing standard technological features and revamping existing processes, this approach fails to leverage the full potential of AMI to redefine the customer experience and transform the utility operating model. In addition to the obvious operational benefits – including a significant reduction in field personnel and a decrease in peak load on the system – AMI solutions have the potential to achieve broader strategic, environmental and regulatory benefits by redefining the utility-customer relationship. To capture these broader benefits, however, utilities must view AMI as a transformation initiative, not simply a technology implementation project. Utilities must couple their AMI implementations with a broader operational overhaul and take a structured approach to applying the operating capabilities required to take advantage of AMI’s vast opportunities. One key step in this structured approach to transformation is enterprise-wide business process design.

WHY “AS IS” PROCESSES WON’T WORK FOR AMI

Due to the antiquated and fragmented nature of utility processes and systems, adapting “as is” processes alone will not be sufficient to realize the full range of AMI benefits. Multiple decades of industry consolidation have resulted in utilities with diverse business processes reflecting multiple legacy company operating practices. Associated with these diverse business processes is a redundant set of largely homegrown applications resulting in operational inefficiencies that may impact customer service and reliability, and prevent utilities from adapting to new strategic initiatives (such as AMI) as they emerge.

For example, in the as-is environment, utilities are often slow to react to changes in customer preferences and require multiple functional areas to respond to a simple customer request. A request by a customer to enroll in a new program, for example, will involve at least three organizations within the utility: the call center initially handles the customer request; the field services group manages changing or reprogramming the customer’s meter to support the new program; and the billing group processes the request to ensure that the customer is correctly enrolled in the program and is billed accordingly. In most cases, a simple request like this can result in long delays to the customer due to disjointed processes with multiple hand-off points.
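
The cost of these hand-offs can be made concrete with a small sketch. The departments are taken from the example above; the delay figures are invented purely for illustration:

    # Illustrative siloed enrollment flow: each functional area works from
    # its own queue, so hand-off delays accumulate. Day counts are made up.
    HANDOFFS = [
        ("call center", "log enrollment request", 1),       # days
        ("field services", "reprogram or exchange meter", 10),
        ("billing", "enroll customer on new rate", 3),
    ]

    def total_turnaround(handoffs):
        for dept, task, days in handoffs:
            print("%14s: %s (%d day(s))" % (dept, task, days))
        return sum(days for _, _, days in handoffs)

    print("Customer waits", total_turnaround(HANDOFFS), "days end to end")

An integrated, event-driven process would collapse these queues into a single work flow – the “smart process” argument developed below.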

WHY USE AMI AS THE CATALYST FOR OPERATIONAL TRANSFORMATION?

The revolutionary nature of AMI technology and its potential for application to multiple areas of the utility make an AMI implementation the perfect opportunity to adapt the utility operating structure. To use AMI as a platform for operational transformation, utilities must shift their thought paradigm from functionally based to enterprise-wide, process-centric environments. This approach will ensure that utilities take full advantage of AMI’s technological capabilities without being constrained by existing processes and organizational structures.

If the utility is to offer new programs and services as well as respond to shifting external demands, it must anticipate and respond quickly to changes in behaviors. Rapid information dissemination and quick response to changes in business, environmental and economic situations are essential for utilities that wish to encourage customers to think of energy in a new way and proactively manage their usage through participation in time-of-use and real-time demand response programs. This transition requires that system and organizational hand-offs be integrated to create a seamless and flexible work flow. Without this integration, utilities cannot proactively and quickly adapt processes to satisfy ever-increasing customer expectations. In essence, AMI fails if “smart meters” and “smart systems” are implemented without “smart processes” to support them.

DESIGNING SMART PROCESSES

Designing smart future state business processes to support transformational initiatives such as AMI involves more than just rearranging existing work flows. Instead, a utility must adopt a comprehensive approach to business process design – one that engages stakeholders throughout the organization and that enables them to design processes from the ground up. The utility must also design flexible processes that can adapt to changing customer, technology, business and regulatory expectations while avoiding the pitfalls of the current organization and process structure. As part of a utility’s business process design effort, it must also redefine jobs more broadly, increase training to support those jobs, enable decision making by front-line personnel and redirect rewards systems to focus on processes as well as outcomes. Utilities must also reshape organizational cultures to emphasize teamwork, personnel accountability and the customer’s importance; redefine roles and responsibilities so that managers oversee processes instead of activities and develop people rather than supervise them; and realign information systems so that they help cross-functional processes work smoothly rather than simply support individual functional areas.

BUSINESS PROCESS DESIGN FRAMEWORK

IBM’s enterprise-wide business process design framework provides a structured approach to the development of the future state processes that support operational transformations and the complexities of AMI initiatives. This framework empowers utilities to apply business process design as the cornerstone of a broader effort to transition to a customer-centric organization capable of engaging external stakeholders. In addition, this framework also supports corporate decision making and continuous improvement by emphasizing real-time metrics and measurement of operational procedures. The framework is made up of the following five phases (Figure 1):

Phase 1 – As-is functional assessment. During this phase, utilities assess their current state processes and supporting organizations and systems. The goal of this phase is to identify gaps, overlaps and conflicts with existing processes and to identify opportunities to leverage the AMI technology. This assessment requires utility stakeholders to dissect existing processes throughout the organization and identify instances where the utility is unable to fully meet customer, environmental and regulatory demands. The final step in this phase is to define a set of “future state” goals to guide process development. These goals must address all of the relevant opportunities to both improve existing processes and perform new functions and services.

Phase 2 – Future state process analysis. During this phase, utilities design end-to-end processes that meet the future state goals defined in Phase 1. To complete this effort, utilities must synthesize components from multiple functional areas and think outside the current organizational hierarchy. This phase requires engagement from participants throughout the utility organization, and participants should be encouraged to envision all relevant opportunities for using AMI to improve the utility’s relationship with customers, regulators and the environment. At the conclusion of this phase, all processes should be assessed in terms of their ability to alleviate the current state issues and to meet the future state goals defined in Phase 1.

Phase 3 – Impact identification. During this phase, utilities identify the organizational structure and corporate initiatives necessary to “operationalize” the future state processes. Key questions answered during this phase include: How will the utility transition from current to future state? How will each functional area absorb the necessary changes? And what new organizations, roles and skills are needed? This phase requires the utility to think outside of the current organizational structure to identify the optimal way to support the processes designed in Phase 2. During the impact identification phase, it’s crucial that process be positioned as the dominant organizational axis. Because process-organized utilities are not bound to a conventional hierarchy or fixed organizational structure, they can be customer-centric, make flexible use of their resources and respond rapidly to new business situations.

Phase 4 – Socialization. During this phase, utilities focus on obtaining ownership and buy-in from the impacted organizations and broader group of internal and external stakeholders. This phase often involves piloting the new processes and technology in a test environment and reaching out to a small set of customers to solicit feedback. This phase is also marked by the transition of the products from the first three phases of the business process design effort to the teams affected by the new processes – namely the impacted business areas as well as the organizational change management and information technology teams.

Phase 5 – Implementation and measurement. During the final phase of the business process design framework, the utility transitions from planning and design to implementation. The first step of this phase is to define the metrics and key performance indicators (KPIs) that will be used to measure the success of the new processes. These metrics are necessary both for holding organizations and managers accountable for the new processes and for guiding continuous refinement and improvement. After these metrics have been established, the new organizational structure is put in place and the new processes are introduced to this structure.
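
To make Phase 5 concrete, the brief sketch below models how a utility might encode KPIs and check observed values against targets. It is a minimal illustration in Python; the KPI names, targets and observed figures are invented assumptions, not values from this document.

    from dataclasses import dataclass

    @dataclass
    class KPI:
        name: str
        target: float           # target the new process must meet
        higher_is_better: bool  # direction of improvement

        def met(self, observed: float) -> bool:
            """Return True when the observed value satisfies the target."""
            if self.higher_is_better:
                return observed >= self.target
            return observed <= self.target

    # Hypothetical metrics a utility might track for a new AMI-driven process
    kpis = [
        KPI("first-call resolution rate", target=0.85, higher_is_better=True),
        KPI("average outage-detection time (min)", target=5.0, higher_is_better=False),
    ]
    observed = {"first-call resolution rate": 0.88,
                "average outage-detection time (min)": 7.5}

    for kpi in kpis:
        status = "met" if kpi.met(observed[kpi.name]) else "missed"
        print(f"{kpi.name}: {status}")

In practice, definitions like these would live in the measurement systems stood up during this phase, so that accountability and continuous improvement rest on the same data.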

BENEFITS AND CHALLENGES OF BUSINESS PROCESS DESIGN

The business process design framework outlined above facilitates the permeation of the utility goals and objectives throughout the entire organization. This effort does not succeed, though, without significant participation from internal stakeholders and strong sponsorship from key executives.

The benefits of this approach include the following:

  • It facilitates ownership. Because the management team is engaged at the beginning of the AMI transformation, managers are encouraged to own future state processes from initial design through implementation.
  • It identifies key issues. A comprehensive business design effort allows for earlier visibility into key integration issues and provides ample time to resolve them prior to rolling out the technologies to the field.
  • It promotes additional capabilities. The business process framework enables the utility to develop innovative ways to apply the AMI technology and ensures that future state processes are aligned to business outcomes.
  • It puts the focus on customers. A thorough business process effort ensures that the necessary processes and functional groups are put in place to empower and inform the utility customer.

The challenges of this approach include the following:

  • It entails a complex transition. The utility must manage the complexities and ambiguities of shifting from functional-based operations to process-based management and decision making.
  • It can lead to high expectations. The utility must also manage stakeholder expectations and be clear that change will be slow and sometimes painful. Revolutionary change is achieved through evolutionary steps; utilities cannot expect to take very large steps at any point in the process.
  • There may be technological limitations. Throughout the business process design effort, utilities will identify new ways to improve customer satisfaction through the use of AMI technology. The standard technology, however, may not always support these visions; thus, utilities must be prepared to work with vendors to support the new processes.

Although execution of future state business process design undoubtedly requires a high degree of effort, a successful operational transformation is necessary to truly leverage the features of AMI technology. If utilities expect to achieve broad-reaching benefits, they must put in place the operational and organizational structures needed to support these transformational initiatives. Utilities cannot afford to think of AMI as a standard technology implementation or to jump immediately to the definition of system and technology requirements. That approach will inevitably limit the impact of AMI solutions and leave utilities implementing cutting-edge technology with fragmented processes and inflexible, functionally based organizational structures.

The Smart Grid: A Balanced View

Energy systems in both mature and developing economies around the world are undergoing fundamental changes. There are early signs of a physical transition from the current centralized energy generation infrastructure toward a distributed generation model, where active network management throughout the system creates a responsive and manageable alignment of supply and demand. At the same time, the desire for market liquidity and transparency is driving the world toward larger trading areas – from national to regional – and providing end-users with new incentives to consume energy more wisely.

CHALLENGES RELATED TO A LOW-CARBON ENERGY MIX

The structure of current energy systems is changing. As load and demand for energy continue to grow, many current-generation assets – particularly coal and nuclear systems – are aging and reaching the end of their useful lives. The increasing public awareness of sustainability is simultaneously driving the international community and national governments alike to accelerate the adoption of low-carbon generation methods. Complicating matters, public acceptance of nuclear energy varies widely from region to region.

Public expectations of what distributed renewable energy sources can deliver – for example, wind, photovoltaic (PV) or micro-combined heat and power (micro-CHP) – are increasing. But unlike conventional sources of generation, the output of many of these sources is driven not by electricity load but by weather conditions or heat demand. From a system perspective, this raises new challenges for balancing supply and demand.
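
A small numerical sketch may help illustrate the balancing challenge this creates. In the Python fragment below, weather-driven output is subtracted from load, and whatever remains must be covered by dispatchable plant; all of the figures are invented for illustration.

    # Hourly system demand and weather-driven output (MW); values are invented
    hourly_load_mw = [620, 600, 640, 700]
    wind_output_mw = [180, 40, 220, 90]   # follows wind speed, not load
    pv_output_mw   = [0, 10, 150, 60]     # follows sunlight, not load

    for hour, (load, wind, pv) in enumerate(
            zip(hourly_load_mw, wind_output_mw, pv_output_mw)):
        residual = load - wind - pv  # must be met by dispatchable generation
        print(f"hour {hour}: residual load {residual} MW")

Because the wind and PV series move independently of the load series, the residual swings far more than demand itself does, which is precisely the balancing problem described above.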

In addition, these new distributed generation technologies require system-dispatching tools to effectively control the low-voltage side of electrical grids. Moreover, they indirectly create a scarcity of “regulating energy” – the energy necessary for transmission operators to maintain the real-time balance of their grids. This forces the industry to try to harness the power of conventional central generation technologies, such as nuclear power, in new ways.

A European Union-funded consortium named Fenix is identifying innovative network and market services that distributed energy resources can potentially deliver, once the grid becomes “smart” enough to integrate all energy resources.

In Figure 1, the Status Quo Future represents how system development would play out under the traditional system operation paradigm characterized by today’s centralized control and passive distribution networks. The alternative, Fenix Future, represents the system capacities with distributed energy resources (DER) and demand-side generation fully integrated into system operation, under a decentralized operating paradigm.

CHALLENGES RELATED TO NETWORK OPERATIONAL SECURITY

The regulatory push toward larger trading areas is increasing the number of market participants. This trend is in turn driving the need for increased network dispatch and control capabilities. Simultaneously, grid operators are expanding their responsibilities across new and complex geographic regions. Combine these factors with an aging workforce (particularly when trying to staff strategic processes such as dispatching), and it’s easy to see why utilities are becoming increasingly dependent on information technology to automate processes that were once performed manually.

Moreover, the stochastic nature of energy sources significantly increases uncertainty regarding supply. Researchers are trying to improve the accuracy of the information captured in substations, but this requires new online dispatching stability tools. Additionally, as grid expansion remains politically controversial, current efforts are mostly focused on optimizing energy flow in existing physical assets, and on trying to feed asset data into systems calculating operational limits in real time.

Last but not least, this enables the extension of generation dispatch and congestion management into low-voltage distribution grids. Although these grids traditionally carried energy one way – from generation through transmission and distribution to end-users – the increasing penetration of distributed resources creates a new need to coordinate the dispatch of these resources locally, and to minimize transportation costs.

CHALLENGES RELATED TO PARTICIPATING DEMAND

Recent events have shown that decentralized energy markets are vulnerable to price volatility. This poses a potentially significant economic threat for some nations: large industrial companies may quit deregulated markets because they lack visibility into long-term energy price trends.

One potential solution is to improve market liquidity in the shorter term by providing end-users with incentives to conserve energy when demand exceeds supply. The growing public awareness of energy efficiency is already leading end-users to be much more receptive to using sustainable energy; many utilities are adding economic incentives to further motivate end-users.

These trends are expected to create radical shifts in transmission and distribution (T&D) investment activities. After all, traditional centralized system designs, investments and operations are based on the premise that demand is passive and uncontrollable, and that it makes no active contribution to system operations.

However, the extensive rollout of intelligent metering capabilities has the potential to reverse this, and to enable demand to respond to market signals, so that end-users can interact with system operators in real or near real time. The widening availability of smart metering thus has the potential to bring with it unprecedented levels of demand response that will completely change the way power systems are planned, developed and operated.
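
As a rough illustration of demand responding to market signals, the sketch below curtails a set of flexible loads whenever the real-time price crosses a threshold. The price threshold and load names are hypothetical assumptions chosen for the example, not figures from the source.

    # Assumed price threshold ($/kWh) above which flexible loads curtail
    PRICE_THRESHOLD = 0.30

    # Hypothetical flexible loads behind a smart meter (kW)
    flexible_loads_kw = {"water heater": 4.5, "EV charger": 7.2, "HVAC pre-cool": 3.0}

    def respond_to_price(price_per_kwh: float) -> float:
        """Return the load (kW) shed when the real-time price exceeds the threshold."""
        if price_per_kwh <= PRICE_THRESHOLD:
            return 0.0
        return sum(flexible_loads_kw.values())

    print(respond_to_price(0.42))  # 14.7 kW shed during a price spike

Scaled across millions of meters, even simple threshold rules like this one change how peaks are planned for, which is the shift in planning and operations anticipated above.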

CHALLENGES RELATED TO REGULATION

Parallel with these changes to the physical system structure, the market and regulatory frameworks supporting energy systems are likewise evolving. Numerous energy directives have established the foundation for a decentralized electricity supply industry that spans formerly disparate markets. This evolution is changing the structure of the industry from vertically integrated, state-owned monopolies into an environment in which unbundled generation, transmission, distribution and retail organizations interact freely via competitive, accessible marketplaces to procure and sell system services and contracts for energy on an ad hoc basis.

Competition and increased market access seem to be working at the transmission level in markets where there are just a handful of large generators. However, this approach has yet to be proven at the distribution level, where thousands – potentially millions – of participants could offer energy and system services in a truly competitive marketplace.

MEETING THE CHALLENGES

As a result, despite all the promise of distributed generation, the increasingly decentralized system will become unstable without the corresponding development of technical, market and regulatory frameworks over the next three to five years.

System management costs are increasing, and threats to system security are a growing concern as installed distributed generating capacity in some areas exceeds local peak demand. The provision of “regulating energy” rises as stress on the system increases; meanwhile, governments continue to push for greater distributed-resource penetration and launch new energy efficiency initiatives.

At the same time, most of the large T&D utilities intend to launch new smart grid prototypes that, once stabilized, will be scalable to millions of connection points. The majority of these rollouts are expected to occur between 2010 and 2012.

From a functionality standpoint, the majority of these associated challenges are related to IT system scalability. The process will require applying existing algorithms and processes to generation activities, but in an expanded and more distributed manner.

The following new functions will be required to build a smart grid infrastructure that enables all of this:

New generation dispatch. This will enable utilities to expand their portfolios of current generation-dispatching tools to schedule generation assets across both transmission and distribution. Utilities could thus better manage the growing number of parameters impacting each dispatch decision, including fuel options, maintenance strategies, the generation unit’s physical capability, weather, network constraints, load models, emissions (modeling, rights, trading) and market dynamics (indices, liquidity, volatility).
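
One way to picture such a tool is a merit-order dispatch that folds one of the listed parameters – an emissions price – into each unit’s marginal cost. The Python sketch below is a deliberately simplified illustration; the unit data and the carbon price are assumptions, and a production dispatcher would add network constraints, load models and the other factors named above.

    # Hypothetical units: (name, capacity MW, fuel cost $/MWh, emissions tCO2/MWh)
    units = [
        ("nuclear", 900, 12.0, 0.0),
        ("wind",    300,  0.0, 0.0),
        ("coal",    500, 28.0, 0.9),
        ("gas-ct",  200, 55.0, 0.4),
    ]
    CO2_PRICE = 30.0  # assumed carbon price, $/tCO2

    def dispatch(demand_mw: float):
        """Commit units in order of effective marginal cost until demand is met."""
        order = sorted(units, key=lambda u: u[2] + u[3] * CO2_PRICE)
        schedule, remaining = [], demand_mw
        for name, capacity, _cost, _emissions in order:
            mw = min(capacity, remaining)
            if mw > 0:
                schedule.append((name, mw))
                remaining -= mw
        return schedule

    print(dispatch(1500))  # wind and nuclear first, then coal covers the rest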

Renewable and demand-side dispatching systems. By expanding current energy management system (EMS) capabilities and architecture, utilities should be able to scale to include millions of active producers and consumers. Resources will be dispatched in real time by energy service companies, promoting the most eco-friendly portfolio dispatch methods based on contractual arrangements between the energy service providers and these distributed producers and consumers.
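
The sketch below suggests, under invented data, how an energy service company might assemble such a portfolio by drawing on the lowest-carbon resources first; the resource identifiers and figures are hypothetical.

    # Hypothetical resources: (id, available kW, carbon intensity kgCO2/kWh)
    resources = [
        ("pv-001", 5, 0.0),
        ("chp-017", 12, 0.25),
        ("battery-042", 8, 0.05),
    ]

    def build_portfolio(target_kw: float):
        """Select resources in order of carbon intensity until the target is met."""
        selected, total = [], 0.0
        for rid, kw, _carbon in sorted(resources, key=lambda r: r[2]):
            if total >= target_kw:
                break
            selected.append(rid)
            total += kw
        return selected, total

    print(build_portfolio(10))  # (['pv-001', 'battery-042'], 13.0)

Scaling that loop from three resources to millions is exactly the EMS architecture challenge this function area implies.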

Integrated online asset management systems. New technology tools that help transmission grid operators assess the condition of their overall assets in real time will not only maximize asset usage, but will lead to better leveraging of utilities’ field forces. New standards such as IEC 61850 offer opportunities to manage such models more centrally and more consistently.

Online stability and defense plans. The increasing penetration of renewable generation into grids, combined with deregulation, increases the need for flow control at the interconnections between transmission system operators (TSOs). Additionally, the industry requires improved “situation awareness” tools in the control centers of utilities operating in larger geographical markets. Although conventional steady-state transmission security indicators have improved, utilities still need better early warning applications and adaptable defense plan systems.
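
An early warning application can be as simple in principle as flagging corridors that approach their operating limits. The fragment below is a minimal sketch of that idea; the corridor names, flows and limits are invented, and a real tool would work from live state estimation rather than static numbers.

    WARN_FRACTION = 0.9  # assumed alert level: 90% of the corridor limit

    # Hypothetical interconnection flows and limits (MW)
    corridor_flows_mw  = {"TSO-A to TSO-B": 1450, "TSO-B to TSO-C": 620}
    corridor_limits_mw = {"TSO-A to TSO-B": 1500, "TSO-B to TSO-C": 900}

    for corridor, flow in corridor_flows_mw.items():
        limit = corridor_limits_mw[corridor]
        if flow > WARN_FRACTION * limit:
            print(f"EARLY WARNING: {corridor} at {flow} MW, {limit - flow} MW margin left")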

MOVING TOWARDS A DISTRIBUTED FUTURE

As concerns about energy supply have increased worldwide, the focus on curbing demand has intensified. Regulatory bodies around the world are thus actively investigating smart meter options. But despite the benefits that smart meters promise, they also raise new challenges on the IT infrastructure side. Before each end-user is able to flexibly interact with the market and the distribution network operator, massive infrastructure re-engineering will be required.

Nonetheless, energy systems throughout the world are already evolving from a centralized to a decentralized model. But to successfully complete this transition, utilities must implement active network management throughout their systems to enable a responsive and manageable alignment of supply and demand. By accomplishing this, energy producers and consumers alike can better match supply and demand, and drive the world toward sustainable energy conservation.