Leveraging the Data Deluge: Integrated Intelligent Utility Network

If you define a machine as a series of interconnected parts serving a unified purpose, the electric power grid is arguably the world’s largest machine. The next-generation version of the electric power grid – called the intelligent utility network (IUN), the smart grid or the intelligent grid, depending on your nationality or information source – provides utilities with enhanced transparency into grid operations.

Considering the geographic and logical scale of the electric grid from any one utility’s point of view, a tremendous amount of data will be generated by the additional “sensing” of the workings of the grid provided by the IUN. This output is often described as a “data flood,” and the implication that businesses could drown in it is apropos. For that reason, utility business managers and engineers need analytical tools to keep their heads above water and obtain insight from all this data. Paraphrasing the psychologist Abraham Maslow, the “hierarchy of needs” for applying analytics to make sense of this data flood could be represented as follows (Figure 1).

  • Insight represents decisions made based on analytics calculated using new sensor data integrated with existing sensor or quasi-static data.
  • Knowledge means understanding what the data means in the context of other information.
  • Information means understanding precisely what the data measures.
  • Data represents the essential reading of a parameter – often a physical parameter.

In order to reap the benefits of accessing the higher levels of this hierarchy, utilities must apply the correct analytics to the relevant data. One essential element is integrating the new IUN data with other data across the various time dimensions. Indeed, it is analytics that allow utilities to truly benefit from the enhanced capabilities of the IUN compared with the traditional electric power grid. Analytics can consist solely of calculations (such as measuring reactive power), or they can be rule-based (such as rating a transformer as “stressed” if it operates above 120 percent of its nameplate rating for more than two hours).
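The distinction can be made concrete with a small sketch. The Python below shows one calculation-style analytic (reactive power from voltage, current and phase angle) and one rule-based analytic (the transformer stress rule described above); the reading interval, function names and data layout are illustrative assumptions, not any particular vendor's implementation.

  import math

  def reactive_power_var(voltage_v, current_a, phase_angle_deg):
      # Calculation-style analytic: Q = V * I * sin(phi)
      return voltage_v * current_a * math.sin(math.radians(phase_angle_deg))

  def is_transformer_stressed(load_readings_kva, nameplate_kva,
                              threshold=1.20, min_hours=2.0, interval_hours=0.25):
      # Rule-based analytic: "stressed" if load stays above 120 percent of
      # nameplate rating continuously for at least two hours.
      run_hours = 0.0
      for load in load_readings_kva:  # readings assumed to be in time order
          if load > threshold * nameplate_kva:
              run_hours += interval_hours
              if run_hours >= min_hours:
                  return True
          else:
              run_hours = 0.0
      return False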

The data to be analyzed comes from multiple sources. Utilities have for years had supervisory control and data acquisition (SCADA) systems in place that transmit voltage, current, watts, volt-amperes reactive (VARs) and phase angle over leased telephone lines at 9,600 baud, using the Distributed Network Protocol (DNP3). Utilities still need to integrate this basic information from these systems.

In addition, modern electrical power equipment often comes with embedded microprocessors capable of generating useful non-operational information. This can include switch closing time, transformer oil chemistry and arc durations. These pieces of equipment – generically called intelligent electronic devices (IEDs) – often have local high-speed sequence-of-events recorders that can be programmed to deliver even more data for post-event analysis.

An increasing number of utilities are beginning to see the business case for implementing an advanced metering infrastructure (AMI). A large-scale deployment of such meters would also function as a fine-grained edge sensor system for the distribution network, providing not only consumption data but also voltage, power quality and load phase angle information. In addition, an AMI can be a strategic platform for initiating a program of demand-response load control. Indeed, some innovative utilities are considering two-way AMI meters that include a wireless connection such as Zigbee to the consumer’s home automation network (HAN), providing even finer detail on load usage and potential controllability.

Companies must find ways to analyze all this data, both from explicit sources such as IEDs and implicit sources such as AMI or geographic information systems (GIS). A crucial aspect of IUN analysis is the ability to integrate conventional database data with time-synchronized data, since an isolated analytic may be less useful than no analytic at all.
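As a minimal sketch of that integration step, the Python below joins time-stamped IUN readings with quasi-static asset records so each measurement carries its database context. It assumes the pandas library and hypothetical column names; it is not a reference design.

  import pandas as pd

  # Time-synchronized sensor readings from the IUN (hypothetical values)
  readings = pd.DataFrame({
      "asset_id": ["XFMR-17", "XFMR-17", "XFMR-42"],
      "timestamp": pd.to_datetime(["2008-03-01 10:00", "2008-03-01 10:15",
                                   "2008-03-01 10:00"]),
      "load_kva": [620.0, 655.0, 310.0],
  })

  # Quasi-static asset data from a conventional relational source
  assets = pd.DataFrame({
      "asset_id": ["XFMR-17", "XFMR-42"],
      "nameplate_kva": [500.0, 750.0],
      "install_year": [1987, 2001],
  })

  # Join the two so each reading carries its asset context; an isolated
  # reading without the nameplate rating is hard to act on.
  combined = readings.merge(assets, on="asset_id", how="left")
  combined["percent_of_nameplate"] = 100 * combined["load_kva"] / combined["nameplate_kva"]
  print(combined)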

CATEGORIES AND RELATIONSHIPS

There are many different categories of analytics that address the specific needs of the electric power utility in dealing with the data deluge presented by the IUN. Some depend on the state regulatory environment, which not only imposes operational constraints on utilities but also determines the scope and effect of the analytics information exchange that is required. For example, a generation-to-distribution utility – what some fossil plant owners call “fire to wire” – may have system-wide analytics that link load dispatch to generation economics, transmission line realities and distribution customer load profiles. Other utilities operate power lines only, and may not have their own generation capabilities or interact with consumers at all. Utilities like these may choose to focus initially on distribution analytics such as outage prediction and fault location.

Even the term analytics can have different meanings for different people. To the power system engineer it involves phase angles, voltage support from capacitor banks and equations that take the form “a + j*b.” To the line-of-business manager, integrated analytics may include customer revenue assurance, lifetime stress analysis of expensive transformers and dashboard analytics driving business process models. Customer service executives could use analytics to derive emergency load control measures based on a definition of fairness that could become quite complex. But perhaps the best general definition of analytics comes from the Six Sigma process mantra of “define, measure, analyze, improve, control.” In the computer-driven IUN, this would involve:

  • Define. This involves sensor selection and location.
  • Measure. SCADA systems enable this process.
  • Analyze. This can be achieved using IUN Analytics.
  • Improve. This involves grid performance optimization, as well as business process enhancements.
  • Control. This is achieved by sending commands back to grid devices via SCADA, and by business process monitoring.

The term optimization can also be interpreted in several ways. Utilities can attempt to optimize key performance indicators (KPIs) such as the system average interruption duration index (SAIDI, which is somewhat consumer-oriented), grid efficiency in terms of megawatts lost to component heating, business processes (such as minimizing outage time to repair) or meeting energy demand with minimum incremental fuel cost.
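SAIDI itself is a straightforward calculation – total customer-minutes of interruption divided by total customers served – as the sketch below illustrates with made-up outage records.

  def saidi(outages, total_customers_served):
      # SAIDI = sum(customers interrupted x outage duration) / customers served
      customer_minutes = sum(o["customers_interrupted"] * o["duration_minutes"]
                             for o in outages)
      return customer_minutes / total_customers_served

  # Illustrative outage records only
  outages = [
      {"customers_interrupted": 1200, "duration_minutes": 95},
      {"customers_interrupted": 340, "duration_minutes": 42},
  ]
  print(saidi(outages, total_customers_served=250000))  # average interruption minutes per customer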

Although optimization issues often cross departmental boundaries, utilities may make compromises for the sake of achieving an overall strategic goal that can seem elusive or even run counter to individual financial incentives. An important part of higher-level optimization – in a business sense rather than a mathematical one – is the need for a utility to document its enterprise functions using true business process modeling tools. These are essential to finding better application integration strategies. That way, the business can monitor the advisories from analytics in the tool itself, and more easily identify business process changes suggested by patterns of online analytics.

Another aspect of IUN analytics involves – using a favorite television news phrase – “connecting the dots.” This means ensuring that a utility actually realizes the impact of a series of events on an end state, even though the individual events may appear unrelated.

For example, take complex event processing (CEP). A “simple” event might involve a credit card company’s software verifying that your credit card balance is under the limit before sending an authorization to the merchant. A “complex” event would take place if a transaction request for a given credit card account was made at a store in Boston, and another request an hour later in Chicago. After taking into account certain realities of time and distance, the software would take an action other than approval – such as instructing the merchant to verify the cardholder’s identity.

Back in the utilities world, consideration of weather forecasts in demand-response action planning, or distribution circuit redundancy in the face of certain existing faults, can be handled by such software. The key in developing these analytics is not so much about establishing valid mathematical relationships as it is about giving a businessperson the capability to create and define rules. These rules must be formulated within an integrated set of systems that support cross-functional information. Ultimately, it is the businessperson who relates the analytics back to business processes.
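A minimal sketch of that rule-driven style is shown below: a composite event is checked against a couple of businessperson-defined rules. The event keys, thresholds and recommended actions are illustrative assumptions, not the behavior of any specific CEP engine.

  def demand_response_advice(event):
      # Businessperson-defined rules evaluated over a composite event;
      # keys and thresholds here are illustrative only.
      actions = []
      if event.get("forecast_high_f", 0) >= 95 and event.get("reserve_margin_pct", 100) < 10:
          actions.append("arm demand-response program ahead of tomorrow's peak")
      if event.get("type") == "feeder_fault" and not event.get("redundant_path_available", True):
          actions.append("dispatch crew: no circuit redundancy behind this fault")
      return actions

  print(demand_response_advice({"forecast_high_f": 97, "reserve_margin_pct": 8}))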

AVAILABLE TOOLS

Time can be a critical variable in successfully using analytics. In some cases, analytics must keep pace with the electric power grid itself – taking in data, calculating and delivering output within an actionable time frame.

Utilities often have analytics built into functions in their distribution management or energy management systems, as well as individual analytic applications, both commercial and home-grown. And some utilities are still making certain decisions by importing data into a spreadsheet and using a self-developed algorithm. No matter what the source, the architecture of the analytics system should provide a non-real-time “bus,” often a service-oriented architecture (SOA) or Web services interface, but also a more time-dependent data bus that supports common industry tools used for desktop analytics within the power industry.

It’s important that the utility publish internal standards for interconnecting analytics to these buses, so that all authorized stakeholders can access them. Utilities should also set enterprise policy for special connectors, manual entry and duplication of data – otherwise known as SOA governance.

The easier it is for utilities to use the IUN data, the less likely it is that their engineering, operations and maintenance staffs will be overwhelmed by the task of actually acquiring the data. Although the term “plug and play” has taken on certain negative connotations – largely due to the fact that few plug-and-play devices actually do that – the principle of easily adding a tool is still both valid and valuable. New instances of IUN can even include Web 2.0 characteristics for the purpose of mash-ups – easily configurable software modules that link, without pain, via Web services.

THE GOAL OF IMPLEMENTING ANALYTICS

Utilities benefit from applying analytics by making the best use of integrated utility enterprise information and data models, and by unlocking employee ideas or hypotheses about ways to improve operations. Often, analytics are also useful in helping employees identify suspected relationships in the data. The widely lamented “aging workforce” issue typically involves the loss of senior staff who could visualize relationships that weren’t formally captured and make connections that others didn’t see. Higher-level analytics can partly offset the impact of this brain drain.

Another type of analytics is commonly called “business intelligence” (BI). But although a number of best-selling general-purpose BI tools are commercially available, utilities need to ensure that the tools have access to the correct, unique, authoritative data. Upon first installing BI software, there’s sometimes a tendency among new users to quickly assemble a highly visual dashboard – without regard to the integrity of the data they’re importing into the tool.

Utilities should also create enterprise data models and data dictionaries to ensure the accuracy of the information being disseminated throughout the organization. After all, utilities frequently use analytics to create reports that summarize data at a high level. Yet some fault detection schemes – such as identifying problems in buried cables – may need original, detailed source data. For that reason utilities must have an enterprise data governance scheme in place.

In newer systems, data dictionaries and models can be provided by a Web service. But even if the dictionary consists of an intermediate lookup table in a relational database, the principles still hold: Every process and calculated variable must have a non-ambiguous name, a cross-reference to other major systems (such as a distribution management system [DMS] or geographic information system [GIS]), a pointer to the data source and the name of the person who owns the data. It is critical for utilities to assign responsibility for data accuracy, validation, source and caveats at the beginning of the analytics engineering process. Finding data faults after they contribute to less-than-correct results from the analytics is of little use. Utilities may find data scrubbing and cross-validation tools from the IT industry to be useful where massive amounts of data are involved.
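In practice, a dictionary entry can be as simple as a keyed record carrying the fields listed above; the sketch below uses hypothetical values purely to show the shape of such an entry.

  # One data-dictionary entry per process or calculated variable
  # (all values below are hypothetical).
  feeder_load_entry = {
      "name": "DIST.FEEDER_1234.LOAD_MW",                          # non-ambiguous name
      "cross_references": {"DMS": "FDR-1234", "GIS": "OBJ-88121"},
      "source": "scada-historian://substation17/feeder1234/mw",    # pointer to the data source
      "owner": "distribution.planning@example-utility.com",        # person responsible for accuracy
      "caveats": "range check 0-12 MW; flag gaps longer than 15 minutes",
  }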

Utilities have traditionally used simulation primarily as a planning tool. However, with the continued application of Moore’s law, the ability to feed a power system simulation with real-time data and solve a state estimation in real time can result in an affordable crystal ball for predicting problems, finding anomalies or performing emergency problem solving.

THE IMPORTANCE OF STANDARDS

The emergence of industry-wide standards is making analytics easier to deploy across utility companies. Standards also help ease the path to integration. After all, most electrons look the same around the world, and the standards arising from the efforts of Kirchhoff, Tesla and Maxwell have been broadly adopted globally. (Contrary views from the quantum mechanics community will not be discussed here!) Indeed, having a documented, self-describing data model is important for any utility hoping to make enterprise-wide use of data for analytics; using an industry-standard data model makes the analytics more easily shareable. In an age of greater grid interconnection, more mergers and acquisitions, and staff shortages, utilities’ ability to reuse and share analytics and create tools on top of standards-based data models has become increasingly important.

Standards are also important when interfacing to existing utility systems. Although the IUN may be new, data on existing grid apparatus and layout may be decades old. By combining the newly added grid observations with the existing static system information to form a complete integration scenario, utilities can leverage analytics much more effectively.

When deploying an IUN, there can be a tendency to use just the newer, sensor-derived data to make decisions, because one knows where it is and how to access it. But using standardized data models makes incorporating existing data less of an issue. There is nothing wrong with creating new data models for older data.

CONCLUSION

To understand the importance of analytics in relation to the IUN, imagine an ice-cream model (pick your favorite flavor). At the lowest level we have data: the ice cream is 30 degrees. At the next level we have information: you know that it is 30 degrees on the surface of the ice cream, and that it will start melting at 32 degrees. At the next level we have knowledge: you’re measuring the temperature of the middle scoop of a three-scoop cone, and therefore when it melts, the entire structure will collapse. At the insight level we bring in other knowledge – such as that the ambient air temperature is 80 degrees, and that the surface temperature of the ice cream has been rising 0.5 degrees per minute since you purchased it. Then the gastronomic analytics activate and take preemptive action, causing you to eat the whole cone in one bite, because the temporary frozen-teeth phenomenon is less of a business risk than having the scoops melt and fault to ground.

Improving Call Center Performance Through Process Enhancements

The great American philosopher Yogi Berra once said, “If you don’t know where you’re going, chances are you will end up somewhere else.” Yet many utilities possess only a limited understanding of their call center operations, which can prevent them from reaching the ultimate goal: improving performance and customer satisfaction, and reducing costs.

Utilities face three key barriers in seeking to improve their call center operations:

  • Call centers routinely collect data on “average” performance, such as average handle time, average speed of answer and average hold time, without delving into the details behind the averages. The risk is that instances of poor and exemplary performance alike are not revealed by such averages.
  • Call centers typically perform quality reviews on less than one-half percent of calls received. Poor performance by individual employees – and perhaps the overall call center – can thus be masked by insufficient data.
  • Call centers often fail to periodically review their processes. When they do, they frequently lack statistically valid data to perform the reviews. Without detailed knowledge of call center processes, utilities are unlikely to recognize and correct problems.

There are, however, proven methods for overcoming these problems. We advocate a three-step process designed to achieve more effective and efficient call center operations: collect sufficient data; analyze the data; and review and monitor progress on an ongoing basis.

STEP 1: COLLECT SUFFICIENT DATA

The ideal sample size is 1,000 randomly selected calls. A sample of this size typically provides results that are accurate to within +/- 3 percent, with a greater than 90 percent degree of confidence. These are the typical levels of accuracy and confidence that businesses require before they are likely to take action.
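Those figures follow from the standard margin-of-error formula for a sampled proportion; the quick check below uses the worst-case assumption p = 0.5.

  import math

  def margin_of_error(n, z=1.96, p=0.5):
      # Half-width of a confidence interval for a proportion
      return z * math.sqrt(p * (1 - p) / n)

  print(round(100 * margin_of_error(1000, z=1.96), 1))   # about 3.1 points at 95 percent confidence
  print(round(100 * margin_of_error(1000, z=1.645), 1))  # about 2.6 points at 90 percent confidence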

The types of data that should be collected from each call include:

  • Call type, such as new service, emergency, bill payment or high bill, and subcall type.
  • Number of systems and/or screens used – for example, how many screens did it take to complete a new service request?
  • Actions taken during the call, such as greeting the customer, gathering customer-identity data, understanding the problem or delivering the solution.
  • Actions taken after the call – for example, entering data into computer systems, or sending notes or emails to the customer or contact center colleagues.

Having the right tool can greatly facilitate data collection. For example, the call center data collection tool pictured in Figure 1 captures this information quickly and easily, using three push-button timers that enable accurate data collection.

When a call is being reviewed, the analyst pushes the green buttons to indicate which of 12 different steps within a call sequence is occurring. The steps include greeting, hold and transfer, among others. Similarly, the yellow buttons enable the analyst to collect the time elapsed for each of 15 different screens that may be used and up to 15 actions taken after the call is finished.

This analysis resembles a traditional “time and motion” study, because in many ways it is just that. But the difference here is that we can use new automated tools, such as the voice and screen capture tools and data collector shown, as well as new approaches, to gain new insights.

The data capture tool also enables the analyst to collect up to 100 additional pieces of data, including the “secondary and tertiary call type.” (As an example, a credit call may be the primary call type, a budget billing the secondary call type and a customer in arrears the tertiary call type.) The tool also lets the analyst use drop-down boxes to quickly collect data on transfers, hold time, mistakes made and opportunities noted.
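The output of such a tool can be thought of as one structured record per reviewed call. The sketch below shows a plausible shape for that record; every field name and category is an illustrative assumption based on the description above, not the tool's actual schema.

  # One observation record per reviewed call (illustrative fields only)
  call_observation = {
      "call_id": "2008-04-02-0457",
      "call_type": "credit",                    # primary
      "secondary_call_type": "budget_billing",
      "tertiary_call_type": "customer_in_arrears",
      "step_seconds": {"greeting": 12, "identify_customer": 35,           # of 12 call steps
                       "hold": 48, "deliver_solution": 140},
      "screen_seconds": {"account_summary": 90, "billing_history": 60},   # of 15 screens
      "after_call_seconds": {"enter_notes": 75, "email_customer": 40},    # of 15 actions
      "transfers": 1,
      "mistakes": ["missed identity verification step"],
      "opportunities": ["screen pop did not carry the account number"],
  }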

Moreover, this process can be executed quickly. In our experience, it takes four trained employees five days to gather data on 1,000 calls.

STEP 2: ANALYZE THE DATA

Having collected this large amount of data, how do you use the information to reduce costs and improve customer and employee satisfaction? Again, having the right tool enables analysts to easily generate statistics and graphs from the collected data. Figure 2 shows the type of report that can be generated based on the recommended data collection.

The analytic value of Figure 2 is that it addresses the fact that most call center reports focus on “averages” and thus fail to reveal other important details. Figure 2 shows the 1,000 calls by call-handle time. Note that the “average” call took 4.65 minutes; however, many calls took a minute or less, and a disturbingly large number of calls took well over 11 minutes.
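The sketch below reproduces the idea behind that view – bucket every observed handle time and isolate the longest calls rather than stopping at the average. The handle times generated here are synthetic stand-ins for the 1,000 observed calls.

  import random
  from collections import Counter

  random.seed(1)
  handle_times_min = [random.lognormvariate(1.3, 0.6) for _ in range(1000)]  # synthetic sample

  average = sum(handle_times_min) / len(handle_times_min)
  buckets = Counter(min(int(t), 11) for t in handle_times_min)  # calls of 11+ minutes lumped together

  longest = sorted(handle_times_min, reverse=True)[:50]  # the longest 5 percent
  share = sum(longest) / sum(handle_times_min)

  print(f"average handle time: {average:.2f} min")
  for minute in range(11):
      print(f"{minute}-{minute + 1} min: {buckets.get(minute, 0)} calls")
  print(f"11+ min: {buckets.get(11, 0)} calls")
  print(f"longest 5 percent of calls: {share:.0%} of total handle time")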

Using the captured data, utilities can then analyze what causes problem calls. In this example, we analyzed 5 percent of the calls (49 in total) and identified several problems:

  • Customer service representatives (CSRs) were taking calls for which they were inadequately trained, causing high hold times and inordinately large screen usage numbers.
  • IT systems were slow on one particular call type.
  • There were no procedures in place to intercede when an employee took more than a specified number of minutes to complete a call.
  • Procedures were laborious, due to Public Utilities Commission (PUC) regulations or – more likely – internally mandated rules.

This kind of analysis, which we describe as a “longest call” review, typically helps identify problems that can be resolved at minimal cost. In fact, our experience in utility and other call centers confirms that this kind of analysis often allows companies to cut call-handle time by 10 to 15 seconds.

It’s important to understand what 10 to 15 fewer seconds of call-handle time means to the call center – and, most importantly, to customers. For a typical utility call center with 200 or more CSRs, the shorter handle time can result in a 5 percent cost reduction, or roughly $1 million annually. Companies that can comprehend the economic value and customer satisfaction associated with reducing average handle time, even by one second, are likely to be better focused on solving problems and prioritizing solutions.
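The arithmetic behind that statement is simple, as the hedged sketch below shows; the annual call volume and loaded cost per CSR-minute are assumptions chosen only to illustrate the shape of the calculation, not figures from any particular utility.

  def annual_savings(calls_per_year, seconds_saved_per_call, cost_per_csr_minute):
      # Value of shaving handle time: saved CSR minutes times loaded cost per minute
      minutes_saved = calls_per_year * seconds_saved_per_call / 60.0
      return minutes_saved * cost_per_csr_minute

  # Assumed inputs for a roughly 200-CSR utility call center (illustrative only)
  print(annual_savings(calls_per_year=4000000,
                       seconds_saved_per_call=12,
                       cost_per_csr_minute=1.25))  # roughly $1 million per year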

Surprisingly, the longest 5 percent of calls typically represent nearly 15 percent of total call center handle time – a mother lode of opportunity for improvement.

Another important benefit that can result from this detailed examination of call center sampling data involves looking at hold time. A sample hold time analysis graph is pictured in Figure 3.

Excessive hold times tend to be caused by bad call routing, lengthy notes on file, unclear processes and customer issues. Each of these problems has a solution, usually low-cost and easily implemented. Most importantly, the value of each action is quantified and understood, based on the data collected.

Other useful questions to ask include:

  • What are the details behind the high average after-call work (ACW) time? How does this affect your call center costs?
  • How would it help budget discussions with IT if you knew the impact of such things as inefficient call routing, poor integrated voice response (IVR) scripts or low screen pop percentages?
  • What analyses can you perform to understand how you should improve training courses and focus your quality review efforts?

The output of these analyses can prove invaluable in budget discussions and in prioritizing improvement efforts, and is also useful in communicating proposals to senior management, CSRs, quality review staff, customers and external organizations. The data can also be the starting point for a Six Sigma review.

Utilities can frequently achieve a 20 percent cost reduction by collecting the right data and analyzing it at a sufficiently granular level. Following is a breakdown of the potential savings:

  • Three percent savings can be achieved by reducing longest calls by 10 seconds.
  • Five percent savings can be gained by reducing ACW by 15 seconds.
  • Five percent savings can be realized by improving call routing – usually by aligning CSR skills required with CSR skills available – by 15 seconds.
  • Three percent savings can be achieved by streamlining two frequent processes by 10 seconds each.
  • Three percent savings can be realized by improving IVR and screen pop frequency and quality of information by 10 seconds.
  • One percent savings can be gained by improving IT response time on selected screens by three seconds.

STEP 3: REVIEW AND MONITOR PROGRESS ON AN ONGOING BASIS

Although this white paper focuses on the data collection and analysis procedures used, the key difference in this approach is the optimization strategy behind it.

The first two steps outlined above require utilities to recognize that improvement opportunities exist, to understand the value of detailed data in identifying those opportunities and to ensure that the data collected can be easily presented and reviewed. Taken as a whole, this process can produce prioritized, high-ROI recommendations.

To gain the full value of this approach, utilities should do the following:

  • Engage the quality review team, trainers, supervisors and CSRs in the review process;
  • Expand the focus of the quality review team from looking only at individual CSRs’ performance to looking at organizational processes as well;
  • Have trainers embed the new lessons learned in training classes;
  • Encourage supervisors to reinforce lessons learned in team meetings and one-on-one coaching; and
  • Require CSRs to identify issues that can be studied in future reviews and follow the lessons learned.

Leading organizations perform these reviews periodically, building on their understanding of their call centers’ current status and using that understanding to formulate actions for future improvement.

Once the first study is complete, utilities also have a benchmark to which results from future studies can be compared. The value of having these prior analyses should be obvious in each succeeding review, as hold times decline, average handle times decrease, calls are routed more frequently to the properly skilled person and IT investments made based on ROI analyses begin to yield benefits.

Beyond these savings, customer and employee satisfaction should increase. When a call is routed to the CSR with the requisite skills needed to handle it, both the customer and the CSR are happier. Customer and CSR frustration will also be reduced when there are clear procedures to escalate calls, and IT systems fail less frequently.

IMPLEMENTING A CALL CENTER REVIEW

Although there are some commonalities in improving utilities’ call center performance, there are always unique findings specific to a given call center that help define the nature and volume of opportunities, as well as help chart the path to improvement.

By realizing that benefit opportunities exist and applying the process steps described above, and by using appropriate tools to reduce costs and improve customer and CSR satisfaction, utilities have the opportunity to transform the effectiveness of their call centers.

Perhaps we should end with another quote from Yogi: “The future ain’t what it used to be.” In fact, for utilities that implement these steps, the future will likely be much better.

Wind Energy: Balancing the Demand

In recent years, rapidly escalating demand for new U.S. wind energy-generating facilities has nearly doubled America’s installed wind generation. By the end of 2007, our nation’s total wind capacity stood at more than 16,000 megawatts (MW) – enough to power more than 4.5 million average American homes each year. And in 2007 alone, America’s new wind capacity grew 45 percent over the previous year – a record 5,244 MW of new projects and more new generating capacity than any other single electricity resource contributed in the same year. At the same time, wind-related employment nearly doubled in the United States during 2007, totaling 20,000 jobs. At more than $9 billion in cumulative investment, wind also pumped new life into regional economies hard hit by the recent economic downturn. [1]

The rapid development of wind installations in the United States comes in response to record-breaking demand driven by a confluence of factors: overwhelming consumer demand for clean, renewable energy; skyrocketing oil prices; power costs that compete with natural gas-fired power plants; and state legislatures that are competing to lure new jobs and wind power developments to their states. Despite these favorable conditions, the wind energy industry has been unable to meet America’s true demand for new wind energy-generating facilities. The barriers include the availability of key materials, the capacity to manufacture large components and the supply of skilled factory workers.

With the proper policies and related investments in infrastructure and workforce development, the United States stands to become a powerhouse exporter of wind power equipment, a wind technology innovator and a wind-related job creation engine. Escalating demand for wind energy is spurred by wind’s competitive cost against rising fossil fuel prices and mounting concerns over the environment, climate change and energy security.

Meanwhile, market trends and projections point to strong, continued demand for wind well into the future. Over the past decade, a similar surge in wind energy demand has taken place in the European Union (E.U.) countries. Wind power capacity there currently totals more than 50,000 MW, with projections that wind could provide at least 15 percent of the E.U.’s electricity by 2020 – amounting to an installed wind capacity of 180,000 MW and an estimated workforce of more than 200,000 people in wind power manufacturing, installation and maintenance jobs.

How is it, then, that European countries were able to secure the necessary parts and people while the United States fell short in its efforts on these fronts? After all, America has a bigger land mass and a larger, higher-quality wind resource than the E.U. countries. Indeed, the United States is already home to the world’s largest wind farms, including the 735-MW Horse Hollow Wind Energy Center in Texas, which generates power for about 230,000 average homes each year. What’s more, this country also has an extensive manufacturing base, a skilled labor pool and a pressing need to address energy and climate challenges.

So what’s missing? In short, robust national policy support – a prerequisite for strong, long-term investment in the sector. Such support would enable the industry to secure long lead-time materials and sufficient ramp-up to train and employ workers to continue wind power’s surging growth. Thus, the United States must rise to the occasion and assemble several key, interrelated puzzle pieces – policy, parts and people – if it’s to tap the full potential of wind energy.

POLICY: LONG-TERM SUPPORT AND INVESTMENT

In the United States, the federal government has played a key role in funding research and development, commercialization and large-scale deployment of most of the energy sources we rely on today. The oil and natural gas industry has enjoyed permanent subsidies and tax credits that date back to 1916 when Congress created the first tax breaks for oil and gas production. The coal industry began receiving similar support in 1932 with the passage of the first depletion allowances that enabled mining companies to deduct the value of coal removed from a mine from their taxable revenue.

Still in effect today, these incentives were designed to spur exploration and extraction of oil, gas and coal, and have since evolved to include such diverse mechanisms as royalty relief for resources developed on public lands; accelerated depreciation for investments in projects like pipelines, drilling rigs and refineries; and ongoing support for technology R&D and commercialization, such as the Department of Energy’s now defunct FutureGen program for coal research, its Deep Trek program for natural gas development and the VortexFlow SX tool for low-producing oil and gas wells.

For example, the 2005 energy bill passed by Congress provided more than $2 billion in tax relief for the oil and gas industry to encourage investment in exploration and distribution infrastructure. [2] The same bill also provided an expansion of existing support for coal, which in 2003 had a 10-year value of more than $3 billion. Similarly, the nuclear industry receives extensive support for R&D – the 2008 federal budget calls for more than $500 million in support for nuclear research – as well as federal indemnity that helps lower its insurance premiums. [3]

Over the past 15 years, the wind power industry has also enjoyed federal support, with a small amount of funding for R&D (the federal FY 2006 budget allotted $38 million for wind research) and the bulk of federal support taking the form of the Production Tax Credit (PTC) for wind power generation. The PTC has helped make wind energy more cost-competitive with other federally subsidized energy sources; just as importantly, its relatively routine renewal by Congress has created conditions under which market participants have grown accustomed to its effect on wind power finance.

However, in contrast to its consistent policies for coal, natural gas and nuclear power, Congress has never granted long-term approval to the wind power PTC. For more than a decade, in fact, Congress has failed to extend the PTC for longer than two years. And in three different years, the credit was allowed to expire, with substantial negative consequences for the industry. Each year that the PTC has expired, major suppliers have had to, in the words of one senior wind power executive, “shut down their factories, lay off their people and go home.”

In 2000, 2002 and 2004, the expiration of the PTC sent wind development plummeting, with an almost complete collapse of the industry in 2000. If the PTC is allowed to expire at the end of 2008, the American Wind Energy Association (AWEA) estimates that as many as 75,000 domestic jobs could be lost as the industry slows production of turbines and power consumers reduce demand for new wind power projects.

The last three years have seen tenuous progress, with Congress extending the PTC for one and then two years; however, the wind industry is understandably concerned about these short-term extensions. Of significant importance is the corresponding effect a long-term or permanent extension of the PTC would have on the U.S. manufacturing sector and related investment activity. For starters, it would put the industry on an even footing with its competitors in the fossil fuels and nuclear industries. More importantly, it would send a clear signal to the U.S. manufacturing community that wind power is a solid, long-term investment.

PARTS: UNLEASHING THE NEXT MANUFACTURING BOOM

To fully grasp the trickle-down effects of an uncertain PTC on the wind power and related manufacturing industries, one must understand the industrial scale of a typical wind power development. Today’s wind turbines represent the largest rotating machinery in the world: a modern-day, 1.5-megawatt machine towers more than 300 feet above the ground with blades that out-span the wings of a 747 jetliner, and a typical utility-scale wind farm will include anywhere from 30 to 200 of these machines, planted in rows or staggered lines across the landscape.

The sheer size and scope of a utility-scale wind farm demands a sophisticated and established network of heavy equipment and parts manufacturers that can fulfill orders in a timely fashion. The process is familiar to anyone who has worked in a steel mill, forge, gear works or similar industrial facility: the manufacture of each turbine requires massive rolled-steel tubes for the tower; a variety of bearings and related components for lubricity in the drive shaft and hub; cast steel for housings and superstructure; steel forgings for shafts and gears; gearboxes for torque transmission; molded fiberglass, carbon fiber or hybrid blades; and electronic components for controls, monitoring and other functions.

U.S. manufacturers have extensive experience making all of these components for other end-use applications, and many have even succeeded in becoming suppliers to the wind industry. For example, Ameron International – a Pasadena, Calif.-based maker of industrial steel pipes, poles and related coatings – converted an aging heavy-steel fabrication plant in Fontana, Calif., to make wind towers. At 80 meters tall, 4.8 meters in diameter and weighing in at 200 tons, a wind tower requires large production facilities that have high up-front capital costs. By converting an existing facility, Ameron was able to capture a key and rapidly growing segment of the U.S. wind market in high-wind Western states while maintaining its position in other markets for its steel products.

Other manufacturers have also seen the opportunity that wind development presents and have taken similar steps. For example, Beaird Co. Ltd, a Shreveport, La.-based metal fabrication and machined parts manufacturer, supplies towers to the Midwest, Texas and Florida wind markets, as does DMI Industries from facilities in Fargo, N.D., and Tulsa, Okla.

But the successful conversion of existing manufacturing facilities to make parts for the wind industry belies an underlying challenge: investment in new manufacturing capacity to serve the wind industry is hindered by the lack of a clear policy framework. Even at wind’s current growth rates and with the resulting pent-up domestic demand for parts, the U.S. manufacturing sector is understandably reluctant to invest in new production capacity.

The cause for this reluctance is depicted graphically in Figure 1. With the stop-and-go nature of the PTC regarding U.S. wind development, and the consistent demand for their products in other end-use sectors, American manufacturers have strong disincentives to invest in new capital projects targeting the wind industry. It can take two to six years to build a new factory and 15 or more years to recapture the investment. The one- to two-year investment cycle of the U.S. wind industry is therefore only attractive to players who are comfortable with the risk and can manage wind as a marginal customer rather than an anchor tenant. This means that over the long haul, the United States could be legislating itself out of the “renewables” space, which arguably represents several trillion dollars of potential global infrastructure investment.

The result in the marketplace: the United States ends up importing many of the large manufactured parts that go into a modern wind turbine – translating to a missed opportunity for domestic manufacturers that could be claiming a larger chunk of the underdeveloped U.S. wind market. As the largest consumer of electricity on earth, the United States also represents the biggest untapped market for wind power. At the end of 2007, with multiple successive years of 30 to 40 percent growth, wind power claimed just 1 percent of the U.S. electricity market. The raw potential for wind power in the United States is three times our total domestic consumption, according to the U.S. Energy Information Administration; if supply chain issues weren’t a problem, wind power could feasibly grow to supply as much as 20 to 30 percent of our $330 billion annual domestic electricity market. At 20 percent of domestic energy supply, the United States would need 300,000 MW of installed wind power capacity – an amount that would take 20 to 30 years of sustained manufacturing and development to achieve. But that would require growth well above our current pace of 4,000 to 5,000 MW annually – growth that simply isn’t possible given current supply constraints.

Of course, that’s just the U.S. market. Global wind development is set to more than triple by 2015, with cumulative installed capacity expected to rise from approximately 91 gigawatts (GW) by the end of 2007 to more than 290 GW by the end of 2015, according to forecasts by Emerging Energy Research (EER). Annual MW added for global wind power is expected to increase more than 50 percent, from approximately 17.5 GW in 2007 to more than 30 GW in 2015, according to EER’s forecasts. [4]

By offering the wind power industry the same long-term tax benefits enjoyed by other energy sources, Congress could trigger a wave of capital investment in new manufacturing capacity and turn the United States from a net importer of wind power equipment to a net exporter. But extending the PTC is not the final step: as much as any other component, a robust wind manufacturing sector needs skilled and dedicated people.

PEOPLE: RECLAIMING OUR MANUFACTURING ROOTS

In 2003, the National Association of Manufacturers released a study outlining many of the challenges facing our domestic manufacturing base. “Keeping America Competitive – How a Talent Shortage Threatens U.S. Manufacturing” highlights the loss of skilled manufacturing workers to foreign competitors, the problem of an aging workforce and a shift to a more urban, high-tech economy and culture.

In particular, the study notes a number of “image” problems for the manufacturing industry. To wit: Among a geographically, ethnically and socio-economically diverse set of respondents – ranging from students, parents and teachers to policy analysts, public officials, union leaders, and manufacturing employees and executives – the sector’s image was found to be heavily loaded with negative connotations (and universally tied to the old “assembly line” stereotype) and perceived to be in a state of decline.

When asked to describe the images associated with a career in manufacturing, student respondents offered phrases such as “serving a life sentence,” being “on a chain gang” or a “slave to the line,” and even being a “robot.” Even more telling, most adult respondents said that people “just have no idea” of manufacturing’s contribution to the American economy.

The effect of this “sector fatigue” can be seen across the Rust Belt in the aging factories, retiring workforce and depressed communities being heavily impacted by America’s turn away from manufacturing. Wind power may be uniquely positioned to help reverse this trend. A growing number of America’s young people are concerned about environmental issues, such as pollution and global warming, and want to play a role in solving these problems. With the lure of good-paying jobs in an industry committed to environmental quality and poised for tremendous growth, wind power may provide an answer to manufacturers looking to lure and retain top talent.

We’ve already seen that you don’t need a large wind power resource in your state to enjoy the economic benefits of wind’s surging growth: whether it’s rolled steel from Louisiana and Oklahoma, gear boxes and cables from Wisconsin and New Hampshire, electronic components from Massachusetts and Vermont, or substations and blades from Ohio and Florida, the wind industry’s need for manufactured parts – and the skilled labor that makes them – is massive, distributed and growing by the day.

UNLEASHING THE POWER OF EVOLUTION

The wind power industry offers a unique opportunity for revitalizing America’s manufacturing sector, creating vibrant job growth in currently depressed regions and tapping new export markets for American-made parts. For utilities and energy consumers, wind power provides a hedge against volatile energy costs and harvests one of our most abundant natural resources for energy security.

The time for wind power is now. As mankind has evolved, so too have our primary sources of energy: from the burning of wood and animal dung to whale oil and coal; to petroleum, natural gas and nuclear fuels; and (now) to wind turbines. The shift to wind power represents a natural evolution and progression that will provide both the United States and the world with critical economic, environmental and technological solutions. As energy technologies continue to evolve and mature, wind power will soon be joined by solar power, ocean current power and even hydrogen as cost-competitive solutions to our pressing energy challenges.

ENDNOTES

  1. “American Wind Energy Association 2007 Market Report” (January 2008). www.awea.org/Market_Report_Jan08.pdf
  2. Energy Policy Act of 2005, Section 1323-1329. www.citizen.org/documents/energyconferencebill0705.pdf
  3. Aileen Roder, “An Overview of Senate Energy Bill Subsidies to the Fossil Fuel Industry” (2003), Taxpayers for Common Sense website. www.taxpayer.net/greenscissors/LearnMore/senatefossilfuelsubsidies.htm
  4. “Report: Global Wind Power Base Expected to Triple by 2015” (November 2007), North American Windpower. www.nawindpower.com/naw/e107_plugins/content/content_lt.php?content.1478

The Distributed Utility of the (Near) Future

The next 10 to 15 years will see major changes – what future historians might even call upheavals – in the way electricity is distributed to businesses and households throughout the United States. The exact nature of these changes and their long-term effect on the security and economic well-being of this country are difficult to predict. However, a consensus already exists among those working within the industry – as well as with politicians and regulators, economists, environmentalists and (increasingly) the general public – that these fundamental changes are inevitable.

This need for change is in evidence everywhere across the country. The February 26, 2008, temporary blackout in Florida served as just another warning that the existing paradigm is failing. Although at the time of this writing, the exact cause of that blackout had not yet been identified, the incident serves as a reminder that the nationwide interconnected transmission and distribution grid is no longer stable. To wit: disturbances in Florida on that Tuesday were noted and measured as far away as New York.

A FAILING MODEL

The existing paradigm of nationwide grid interconnection, brought about primarily by the deregulation movement of the late 1990s, calls for electricity to be generated at large plants in various parts of the country and then distributed nationwide. There are two reasons this paradigm is failing. First, the transmission and distribution system wasn’t designed to serve as a nationwide grid; it is aged and only marginally stable. Second, political, regulatory and social forces are making the construction of large generating plants increasingly difficult, expensive and eventually unfeasible.

The previous historic paradigm made each utility primarily responsible for generation, transmission and distribution in its own service territory; this had the benefit of localizing disturbances and fragmenting responsibility and expense. With loose interconnections to other states and regions, a disturbance in one area or a lack of resources in a different one had considerably less effect on other parts of the country, or even other parts of service territories.

For better or worse, we now have a nationwide interconnected grid – albeit one that was neither designed for the purpose nor serves it adequately. Although the existing grid can be improved, the expense would be massive, and probably cost prohibitive. Knowledgeable industry insiders, in fact, calculate that it would cost more than the current market value of all U.S. utilities combined to modernize the nationwide grid and replace its large generating facilities over the next 30 years. Obviously, the paradigm is going to have to change.

While the need for dramatic change is clear, though, what’s less clear is the direction that change should take. And time is running short: North American Electric Reliability Corp. (NERC) projects serious shortages in the nation’s electric supply by 2016. Utilities recognize the need; they just aren’t sure which way to jump first.

With a number of tipping points already reached (and the changes they describe continuing to accelerate), it’s easy to envision the scenario that’s about to unfold. Consider the following:

  • The United States stands to face a serious supply/demand disconnect within 10 years. Unless something dramatic happens, there simply won’t be nearly enough electricity to go around. Already, some parts of the country are feeling the pinch. And regulatory and legislative uncertainty (especially around global warming and environmental issues) makes it difficult for utilities to know what to do. Building new generation of any type other than “green energy” is extremely difficult, and green energy – which currently meets less than 3 percent of U.S. supply needs – cannot close the growing gap between supply and demand being projected by NERC. Specifically, green energy will not be able to replace the 50 percent of U.S. electricity currently supplied by coal within that 10-year time frame.
  • Fuel prices continue to escalate, and the reliability of the fuel supply continues to decline. In addition, increasing restrictions are being placed on fuel selection, especially coal.
  • A generation of utility workers is nearing retirement, and finding adequate replacements among the younger generation is proving increasingly difficult.
  • It’s extremely difficult to site new transmission – needed to deal with supply-and-demand issues. Even new Federal Energy Regulatory Commission (FERC) authority to authorize corridors is being met with virulent opposition.

SMART GRID NO SILVER BULLET

Distributed generation – including many smaller supply sources to replace fewer large ones – and “smart grids” (designed to enhance delivery efficiency and effectiveness) have been posited as solutions. However, although such solutions offer potential, they’re far from being in place today. At best, smart grids and smarter consumers are only part of the answer. They will help reduce demand (though probably not enough to make up the generation shortfall), and they’re both still evolving as concepts. While most utility executives recognize the problems, they continue to be uncertain about the solutions and have a considerable distance to go before implementing any of them, according to recent Sierra Energy Group surveys.

According to these surveys, more than 90 percent of utility executives now feel that the intelligent utility enterprise and smart grid (IUE/SG) – that is, the distributed utility – represents an inevitable part of their future (Figure 1). This finding was true of all utility types supplying electricity.

Although utility executives understand the problem and the IUE/SG approach to solving part of it, they’re behind in planning on exactly how to implement the various pieces. That “planning lag” for the vision can be seen in Figure 2.

At least some of the fault for the planning lag can be attributed to forces outside the utilities. While politicians and regulators have been emphasizing conservation and demand response, they’ve failed to produce guidelines for how this will work. And although a number of states have established mandatory green power percentages, Congress failed to do the same in the energy legislation it adopted in December 2007. While the Energy Policy Act (EPACT) of 2005 “urged” regulators to “urge” utilities to install smart meters, it didn’t make their installation a requirement, and thus regulators in different parts of the country have moved at different speeds on this urging.

Although we’ve entered a new era, utilities remain burdened with the internal problems caused by the “silo mentality” left over from generations of tight regulatory control. Today, real-time data is often still jealously guarded in engineering and operations silos. However, a key component in the development of intelligent utilities will be pushing both real-time and back-office data onto dashboards so that executives can make real-time decisions.

Getting from where utilities were (and in many respects still are) in the last century to where they need to be by 2018 isn’t a problem that can be solved overnight. And, in fact, utilities have historically evolved slowly. Today’s executives know that technological evolution in the utility industry needs to accelerate rapidly, but they’re uncertain where to start. For example, should you install an advanced metering infrastructure (AMI) as rapidly as possible? Do you emphasize automating the grid and adding artificial intelligence? Do you continue to build out mobile systems to push data (and more detailed, simpler instructions) to field crews who soon will be much younger and less experienced? Do you rush into home automation? Do you build windmills and solar farms? Utilities have neither the financial nor the human resources to do everything at once.

THE DEMAND FOR AMI

Its name implies that a smart grid will become increasingly self-operating and self-healing – and indeed much of the technology for this type of intelligent network grid has been developed. It has not, however, been widely deployed. Utilities, in fact, have been working on basic distribution automation (DA) – the capability to operate the grid remotely – for a number of years.

As mentioned earlier, most theorists – not to mention politicians and regulators – feel that utilities will have to enable AMI and demand response/home automation if they’re to encourage energy conservation in an impending era of short supplies. While automated meter reading (AMR) has been around for a long time, its penetration remains relatively small in the utilities industry – especially in the case of advanced AMI meters for enabling demand response: according to figures released by Sierra Energy Group and Newton-Evans Research Co., only 8 to 10 percent of this country’s utilities were using AMI meters by 2008.

That said, the push for AMI on the part of both EPACT 2005 and regulators is having an obvious effect. Numerous utilities (including companies like Entergy and Southern Co.) that previously refused to consider AMR now have AMI projects in progress. However, even though an anticipated building boom in AMI is finally underway, there’s still much to be done to enable the demand response that will be desperately needed by 2016.

THE AUTOMATED HOME

The final area we can expect the IUE/SG concept to envelop is the residential level. With residential home automation in place, utilities will be able to control usage directly – by adjusting thermostats or compressor cycling, or via other techniques. Again, the technology for this has existed for some time; however, there are very few installations nationwide. A number of experiments were conducted with home automation in the early to mid-1990s, with some subdivisions even being built under the mantra of “demand-side management.”

Demand response – the term currently in vogue with politicians – may be considered more politically correct, but the net result is the same. Home automation will enable regulators, through utilities, to ration usage. Although politicians avoid using the word rationing, if global warming concerns continue to seriously impact utilities’ ability to access adequate generation, rationing will be the result – making direct load control at the residential level one of the most problematic issues in the distributed utility paradigm of the future. Are large numbers of Americans going to acquiesce calmly to their electrical supply being rationed? No one knows, but there seem to be few options.

GREEN PRESSURE AND THE TIPPING POINT

While much legitimate scientific debate remains about whether global warming is real and, if so, whether it’s a naturally occurring or man-made phenomenon (arising primarily from carbon dioxide emissions), that debate is diminishing among politicians at every level. The majority of politicians, in fact, have bought into the notion that carbon emissions from many sources – primarily the generation of electricity by burning coal – are the culprit.

Thus, despite continued scientific debate, the political tipping point has been reached, and U.S. politicians are making moves to force this country’s utility industry to adapt to a situation that may or may not be real. Whether or not it makes logical or economic sense, utilities are under increasing pressure to adopt the Intelligent Utility/Smart Grid/Home Automation/Demand Response model – a model that includes many small generation points to make up for fewer large plants. This political tipping point is also shutting down more proposed generation projects each month, adding to the likely shortage. Since 2000, approximately 50 percent of all proposed new coal-fired generation plants have been canceled, according to energy-industry adviser Wood Mackenzie (Gas and Power Service Insight, February 2008).

In the distant future, as technology continues to advance, electric generation in the United States will likely include a mix of energy sources, many of them distributed and green. However, there’s no way that in the next 10 years – the window of greatest concern in the NERC projections on the generation and reliability side – green energy will be ready and available in sufficient quantities to forestall a significant electricity shortfall. Nuclear energy represents the only truly viable solution; however, ongoing opposition to this form of power generation makes it unlikely that sufficient nuclear energy will be available within this period. The already-lengthy licensing process (though streamlined somewhat of late by the Nuclear Regulatory Commission) is exacerbated by lawsuits and opposition every step of the way. In addition, much of the necessary engineering and manufacturing capability has been lost in the United States over the last 30 years – the time elapsed since the last U.S. nuclear plant was built – making it necessary to reacquire that knowledge from abroad.

The NERC Reliability Report of Oct. 15, 2007, points strongly toward a significant shortfall of electricity within approximately 10 years – a situation that could lead to rolling blackouts and brownouts in parts of the country that have never experienced them before. It could also lead to mandatory “demand response” – in other words, rationing – at the residential level. This situation, however, is not inevitable: technology exists to prevent it (including nuclear and cleaner coal now, as well as a gradual development of solar, biomass, sequestration and so on over time, with wind for peaking). But thanks to concern over global warming and other issues raised by the environmental community, many politicians and regulators have become convinced otherwise. And thus, they won’t consider a different tack for solving the problem until there’s a public outcry – and that’s not likely to occur for another 10 years, at which point the national economy and utilities may already have suffered tremendous (possibly irreparable) harm.

WHAT CAN BE DONE?

The problem the utilities industry faces today is neither economic nor technological – it’s ideological. The global warming alarmists are shutting down coal before sufficient economically viable replacements (with the possible exception of nuclear) are in place. And the rest of the options are tied up in court. (For example, a shift to gas – a costly fuel with iffy reliability – would require roughly 45 liquefied natural gas terminals in the United States; only five have been built, and the rest are tied up in court.) As long as it’s possible to tie up nuclear applications for five to 10 years and shut down “clean coal” plants through the political process, the U.S. utility industry is left with few options.

So what are utilities to do? They must get much smarter (IUE/SG), and they must prepare for rationing (AMI/demand response). As seen in SEG studies, utilities still have a ways to go in these areas, but at least this is a strategy that can (for the most part) be put in place within 10 to 15 years. The technology for IUE/SG already exists; it’s relatively inexpensive (compared with large-scale green energy development and nuclear plant construction); and utilities can employ it with relatively little regulatory oversight. In fact, regulators are actually encouraging it.

For these reasons, IUE/SG represents a major bridge to a more stable future. Even if today’s apocalyptic scenarios fail to develop – that is, global warming is debunked, or new generation sources develop much more rapidly than expected – intelligent utilities with smart grids will remain a good idea. The paradigm is shifting as we watch – but will that shift be completed in time to prevent major economic and social dislocation? Fasten your seatbelts: the next 10 to 15 years should be very interesting!

Making Change Work: Why Utilities Need Change Management

Organizations are often reluctant to engage change management programs, plans and teams. Even when they do, change management programs are frequently launched too late in the project process, are only moderately funded or are absorbed within the team as part-time responsibilities – all of which we’ve seen happen time and again in the utility industry.

“Making Change Work,” an IBM study conducted in collaboration with the Center of Evaluation and Methods at Bonn University, analyzed the factors behind successful implementation of change. The scope of this study, released in 2007, is now being expanded because the project management and change management professions, formerly aligned, are now at a turning point of differentiation. The reason is simple: too many projects fail to treat both disciplines as critical to success – and therefore lack insight into the day-to-day impact of a change on members of the organization.

Despite these findings, many organizations have been reluctant to implement change management programs, plans and teams. And when they have put such programs in place, the programs tend to be launched too late in the project process, are inadequately funded or are perceived as part-time tasks that can be assigned to members of the project management team.

WHAT IS CHANGE MANAGEMENT?

Change management is a structured approach to business transformation that manages the transition from a current state to a desired future state. Far from being static or rigid, change management is an ever-evolving program that varies with the needs of the organization. Effective change management involves people and provides open communication.

Change management is as important as project management. However, whereas project management is a tactical activity, change management represents a strategic initiative. To understand the difference, consider the following:

  • Change management is the process of driving corporate strategy by identifying, addressing and managing barriers to change across the organization or enterprise.
  • Project management is the process of implementing the tools needed to enable or mobilize the corporate strategy.

Change management is an ongoing process that works in close concert with project management. At any given time at least one phase of change management should be occurring. More likely, multiple phases will be taking place across various initiatives.

A change management program can be tailored to manage the needs of the organizational culture and relationships. The program must close the gaps among workforce, project team and sponsor leadership during all phases of all projects. It does this by:

  • Ensuring proper alignment of the organization with new technology and process requirements;
  • Preparing people for new processes and technology through training and communication;
  • Identifying and addressing human resource implications such as job definitions, union negotiations and performance measures;
  • Managing the reaction of both individuals and the entire organization to change; and
  • Providing the right level of support for ongoing implementation success.

The three fundamental activities of a change management program are leading, communicating and engaging. These three activities should span the project life cycle to maintain both awareness of the change and its momentum (Figure 1).

KEY ELEMENTS OF A CHANGE PROGRAM

There are three best practice elements that make the difference between successful projects and less successful projects: [1]

Organizational awareness of the challenges inherent in any change. This involves the following:

  • Getting a real understanding of – and leadership buy-in to – the stakeholders and culture;
  • Recognizing the interdependence of strategy and execution;
  • Ensuring an integrated strategy approach linking business strategy, operations, organization design and change, and technology strategy; and
  • Educating leadership on change requirements and commitment.

Consistent use of formal methods for change management. This should include:

  • Covering the complete life cycle – from definition to deployment to post-implementation optimization;
  • Allowing for easy customization and flexibility through a modular design;
  • Incorporating change management and value realization components into each phase to increase the likelihood of success; and
  • Providing a published plan with ongoing accountability and sponsorship as well as continuous improvement.

A specified share of the project budget that is invested in change management. This should involve:

  • Investing in change at a level linked to project success. Projects that invest more than 10 percent of their budget in change management achieve an average success rate of 45 percent (Figure 2). [2]
  • Assigning the right resources to support change management early on and maintaining the required support. This also limits the adverse impacts of change on an organization’s productivity (Figure 3). [3]

WHY DO UTILITIES NEED CHANGE MANAGEMENT?

Utilities today face a unique set of challenges. For starters, they’re simultaneously dealing with aging infrastructures and aging workforces. In addition, there are market pressures to improve performance, become more “green” and mitigate rising energy costs. To address these realities, many utilities are pursuing merger and acquisition (M&A) opportunities as well as implementing new technologies.

The cost cutting of the past decade, combined with M&As, has left utilities with gaps in workforce experience as well as budget challenges. Yet utilities face major business disruptions going into the next decade and beyond. To cope with these disruptions, companies are implementing new technologies such as the intelligent grid, advanced metering infrastructure (AMI), meter data management (MDM), enterprise asset management (EAM) and work management systems (WMSs). It’s not uncommon for utilities to be implementing multiple new systems simultaneously that affect the day-to-day activities of people throughout the organization, from frontline workers to senior managers.

A change management program can address a number of challenges specific to the utilities industry.

CULTURAL CLIMATE: ‘BUT WE’RE DIFFERENT’

A utility is a utility is a utility. But a deeper look into individual businesses reveals nuances in their relationships with both internal and external stakeholders that are unique to each company. A change management team must intimately understand these relationships. For example, externally how is the utility perceived by regulators, customers, the community and even analysts? As for internal relationships, how do various operating divisions relate and work together? Some operating divisions work well together on project teams and respect each other and their differences; others do not.

There may be cultural differences, but work is work. Only change management can address these relationships. Knowing the utility’s cultural climate and relationships will help shape each phase of the change management program, and allow change management professionals to customize a project or system implementation to fit a company’s culture.

REGULATORY LANDSCAPE

With M&As and increasing market pressures across the United States, the regulatory landscape confronting utilities is becoming more variable. We’ve seen several types of regulatory-related challenges.

Regulatory pressure. Whether regulators mandate or simply encourage new technology implementations can make a significant difference in how stakeholders in a project behave. In general, there’s more resistance to a new technology when it’s required versus voluntarily implemented. Change management can help work through participant behaviors and mitigate obstacles so that project work can continue as planned.

Multiple regulatory jurisdictions. Many utilities with recently expanded footprints following M&As now have to manage requests from and expectations of multiple regulatory commissions. Often these commissions have different mandates. Change management initiatives are needed to work through the complexity of expectations, manage multiple regulatory relationships and drive utilities toward a unified corporate strategy.

Regulatory evolution. Just as markets evolve, so do regulatory influences and mandates. Often regulators will issue orders that can be interpreted in many ways. They may even do this to get information in the form of reactions from their various constituents. Whatever the reason, the reality is that utilities are managing an ever-changing portfolio of regulations. Change management can better prepare utilities for this constant change.

OPERATIONS MATURITY

When new systems and technologies being implemented encompass multiple operating divisions, it can be difficult for stakeholders to agree on operating standards or processes. Project team members representing the various operating regions can resist compromise for fear of losing control. This often occurs when utilities are attempting to integrate systems across operating regions following an acquisition.

Change management helps ensure that various constituents – for example, the regional operating divisions – are prepared for imminent business transformation. In large organizations, this preparation period can take a year or more. But for organizations to realize the benefits of new systems and technology implementations, they must be ready to receive those benefits. Readiness and preparedness are largely the responsibilities of the change management team.

ORGANIZATIONAL COHESIVENESS

Organizational cohesiveness means that all constituents across the organization are equally committed to the business transformation initiative and share the same understanding of the overarching corporate strategy, even as they perform their individual roles and responsibilities.

Senior executives must align their visions and demonstrate a common commitment to change. After all, they set the tone for change throughout their respective organizations. If they are not in sync with each other, their organizations become silos, and business processes are less likely to be fluid across organizational boundaries. Frontline managers and associates must, in turn, be engaged and enthusiastic about the transformations to come.

Organizational cohesiveness is especially critical during large systems implementations involving utility field operations. Leaders at multiple locations must be ready to communicate and support change – and this support must be visible to the workforce. Utilities must understand this requirement at the beginning of a project to make change manageable, realistic and personal enough to sustain momentum. All too often, we’ve heard team members comment, “We had a lot of leadership at the project kickoff, but we really haven’t seen leadership at any of our activities or work locations since then. The project team tells us what to do.”

Moreover, leadership – when removed from the project – usually will not admit that they’re in the dark about what’s going on. Yet their lack of involvement will not escape the attention of frontline employees. Once the supervisor is perceived as lacking information – and therefore power – it’s all over. Improving customer service and quality, cutting costs, adopting new technology and merging operations all require changing employees. [4]

For utilities, the concept of organizational cohesiveness is especially important because just as much technology “lives” outside IT as inside. Yet the engineers who use this non-IT-controlled technology – what Gartner calls “operations technology” – are usually disconnected from the IT world in terms of both practical planning and execution. However, these worlds must act as one for a company to be truly agile. [5]

Change management methods and tools ensure that organizational cohesiveness exists through project implementation and beyond.

UNION ENGAGEMENT

Successful change depends on a sustained partnership with union representatives throughout the project life cycle. Project leadership and union leadership must work together to implement change. Union representation should be on the project team. Representatives can be involved in process reviews, testing and training, or asked to serve as change champions. In addition, communication is critical throughout all phases of a project. Frontline employees must see real evidence of how the change will benefit them. Change is personal: everyone wants to know how his or her job will be impacted.

There should also be union representation in training activities, since workers tend to be more receptive to peer-to-peer support. Utilities should, for example, engage union change champions to help co-workers during training and to be site “go to” representatives. Utilities should also provide advance training and recognize all who participate in it.

Union representatives should also participate in design and/or testing, since they will be able to pinpoint issues that will impact routine daily tasks. It could be something as simple as changing screen labels per their recommendation to increase user understanding.

More than one union workforce may be involved in a project. Location cultures that exist in large service territories or that have resulted from mergers may try to isolate themselves from the project team and resist change. Utilities should assemble a team from various work groups and then do the following to address the history and differences in the workforce:

  • Request ongoing union participation throughout the life of the project.
  • Include union roles as part of the project charter and define these roles with union leadership.
  • Provide a kickoff overview to union leadership.
  • Include union representation in work process development with balanced representation from various areas. Union employees know the job and can quickly identify the pros and cons of work tasks. Structured facilitation and issue-resolution processes are required.
  • Assign a corporate human resource or labor relations role to review processes that impact the union workforce.
  • Develop communication campaigns that address union concerns, such as conducting face-to-face presentations at employees’ work locations and educating union leaders prior to each change rollout.
  • Involve union representatives in training and user support.

Change management is necessary to sort through the relationships of multiple union workforces so that projects and systems can be implemented.

AN AGING WORKFORCE

A successful change management program will help mitigate the aging workforce challenges utilities will be facing for many years to come. [6]

WHAT TO EXPECT FROM A SUCCESSFUL CHANGE MANAGEMENT PROGRAM

The result of a successful change management program is a flexible organization that’s responsive to customer needs, regulatory mandates and market pressures, and readily embraces new technologies and systems. A change-ready organization anticipates, expects and is increasingly comfortable with change and exhibits the following characteristics:

  • The organization is aligned.
  • The leaders are committed.
  • Business processes are developed and defined across all operational units.
  • Associates at all levels have received communications and have continued access to resources.

Facing major business transformations and unique industry challenges, utilities cannot afford not to engage change management programs. This skill set is just as critical as any other role in your organization. Change has a cost, and it should be part of the project budget.

Change is an ongoing, long-term investment. Good change management designed specifically for your culture and challenges minimizes change’s adverse effect on daily productivity and helps you reach and sustain project goals.

ENDNOTES

  1. “Making Change Work” (an IBM study), Center of Evaluation and Methods, Bonn University, 2007; excerpts from “IBM Integrated Strategy and Change Methodology,” 2007.
  2. “Making Change Work,” Center of Evaluation and Methods, Bonn University, 2007.
  3. Ibid.
  4. T.J. Larkin and Sandar Larkin, “Communicating Change: Winning Employee Support for New Business Goals,” McGraw Hill, 1994, p. 31.
  5. K. Steenstrup, B. Williams, Z. Sumic, C. Moore; “Gartner’s Energy and Utilities Summit: Agility on Both Sides of the Divide”; Gartner Industry Research ID Number G00145388; Jan. 30, 2007; p. 2.
  6. P. R. Bruffy and J. Juliano, “Addressing the Aging Utility Workforce Challenge: ACT NOW,” Montgomery Research 2006 journal.

Growing (or Shrinking) Trends in Nuclear Power Plant Construction

Around the world, the prospects for nuclear power generation are growing – a trend made clear by the number of nuclear plants now under construction that are smaller than those currently in the limelight. Offering advantages in certain situations, these smaller plants can more readily serve smaller grids as well as be used for distributed generation (with power plants located close to the demand centers and the main grid providing back-up). Smaller plants are also easier to finance, particularly in countries that are still in the early days of their nuclear power programs.

In recent years, development and licensing efforts have focused primarily on large, advanced reactors, due to their economies of scale and obvious application to developed countries with substantial grid infrastructure. Meanwhile, the wide scope for smaller nuclear plants has received less attention. However, of the 30 or more countries that are moving toward implementing nuclear power programs, most are likely to be looking initially for units under 1,000 MWe, and some for units of less than half that amount.

EXISTING DESIGNS

With that in mind, let’s take a look at some of the current designs.

There are many plants under 1,000 MWe now in operation, even if their replacements tend to be larger. (In 2007 four new units were connected to the grid – two large ones, one 202-MWe unit and one 655-MWe unit.) In addition, some smaller reactors are either on offer now or likely to be available in the next few years.

Five hundred to 700 MWe. There are several plants in this size range, including Westinghouse AP600 (which has U.S. design certification) and the Canadian Candu-6 (being built in Romania). In addition, China is building two CNP-600 units at Qinshan but does not plan to build any more of them. In Japan, Hitachi-GE has completed the design of a 600-MWe version of its 1,350-MWe ABWR, which has been operating for 10 years.

Two hundred and fifty to 500 MWe. In the 250- to 500-MWe category (output that is electric rather than heat), there are a few designs pending but little immediately on offer.

IRIS. Being developed by an international team led by Westinghouse in the United States, IRIS – or, more formally, International Reactor Innovative and Secure – is an advanced third-generation modular 335-MWe pressurized water reactor (PWR) with integral steam generators and a primary coolant system all within the pressure vessel. U.S. design certification is at pre-application stage with a view to final design approval by 2012 and deployment by 2015 to 2017.

VBER-300 PWR. This 295- to 325-MWe unit was designed by Russia’s OKBM based on naval power plants and is now being developed as a land-based unit with Kazakhstan’s state-owned nuclear holding company Kazatomprom, with a view to exporting it. The first two units will be built in southwest Kazakhstan under a Russian-Kazakh joint venture.

VK-300. This Russian boiling water reactor is being developed for co-generation of both power and district heating or heat for desalination (150 MWe plus 1,675 GJ/hr) by the nuclear research and development organization NIKIET. The unit evolved from the VK-50 BWR at Dimitrovgrad but uses standard components from larger reactors wherever possible. In September 2007, it was announced that six of these units would be built at Kola and at Primorskaya in Russia’s far east, to start operating between 2017 and 2020.

NP-300 PWR. Developed in France from submarine power plants and aimed at export markets for power, heat and desalination, this Technicatome (Areva)-designed reactor has passive safety systems and can be built for applications from 100 to 300 MWe.

China is also building a 300-MWe PWR nuclear power plant in Pakistan at Chasma (alongside another that started up in 2000); however, this is an old design based on French technology and has not been offered more widely. The new unit is expected to come online in 2011.

One hundred to 300 MWe. This category includes both conventional PWRs and high-temperature gas-cooled reactors (HTRs); however, none in the second category are being built yet. Argentina’s CAREM nuclear power plant is being developed by CNEA and INVAP as a modular 27-MWe simplified PWR with integral steam generators designed to be used for electricity generation or for water desalination.

FLOATING PLANTS

After many years of promoting the idea, Russia’s state-run atomic energy corporation Rosatom has approved construction of a nuclear power plant on a 21,500-ton barge to supply 70 MWe of power plus 586 GJ/hr of heat to Severodvinsk, in the Archangelsk region of Russia. The contract to build the first unit was let by nuclear power station operator Rosenergoatom to the Sevmash shipyard in May 2006. Expected to cost $337 million (including $30 million already spent on design), the project is 80 percent financed by Rosenergoatom and 20 percent financed by Sevmash. Operation is expected to begin in mid-2010.

Rosatom is planning to construct seven additional floating nuclear power plants, each (like the initial one) with two 35-MWe OKBM KLT-40S nuclear reactors. Five of these will be used by Gazprom – the world’s biggest extractor of natural gas – for offshore oil and gas field development and for operations on Russia’s Kola and Yamal Peninsulas. One of these plants is planned for commissioning at Pevek on the Chukotka Peninsula in 2012, and another is planned for the Kamchatka region, both in the far east of the country. Even farther east, sites being considered include Yakutia and Taimyr. The cost of electricity is expected to be much lower than from present alternatives. In 2007 an agreement was signed with the Sakha Republic (Yakutia region) to build a floating plant for its northern parts, using smaller ABV reactors.

OTHER DESIGNS

On a larger scale, South Korea’s SMART is a 100-MWe PWR with integral steam generators and advanced safety features. It is designed to generate electricity and/or to supply heat for thermal applications such as seawater desalination. Indonesia’s national nuclear energy agency, Batan, has undertaken a pre-feasibility study of a SMART reactor for power and desalination on Madura Island. However, this awaits the building of a reference plant in Korea.

There are three high-temperature, gas-cooled reactor designs capable of being used for power generation, although much of the development impetus has been focused on the thermo-chemical production of hydrogen. Fuel for the first two designs described below consists of billiard-ball-size pebbles that can withstand very high temperatures. These designs aim for a step change in safety, economics and proliferation resistance.

China’s 200-MWe HTR-PM is based on a well-tested small prototype, and a two-module plant is due to start construction at Shidaowan in Shandong province in 2009. This reactor will use the conventional steam cycle to generate power. Start-up is scheduled for 2013. After the demonstration plant, a power station with 18 modules is envisaged.

Very similar to China’s plant is South Africa’s Pebble Bed Modular Reactor (PBMR), which is being developed by a consortium led by the utility Eskom. Production units will be 165 MWe. The PBMR will have a direct-cycle gas turbine generator driven by hot helium. The PBMR Demonstration unit is expected to start construction at Koeberg in 2009 and achieve criticality in 2013.

Both of these designs are based on earlier German reactors that have some years of operational experience. A U.S. design, the Gas Turbine-Modular Helium Reactor (GT-MHR), is being developed in Russia; in its electrical application, each unit would directly drive a gas turbine delivering 280 MWe.

These three designs operate at much higher temperatures than ordinary reactors and offer great potential as sources of industrial heat, including for the thermo-chemical production of hydrogen on a large scale. Much of the development thinking going into the PBMR has been geared to synthetic oil production by Sasol (South African Coal and Oil).

MODULAR CONSTRUCTION

The IRIS developers have outlined the economic case for modular construction of their design (about 330 MWe), and it’s an argument that applies similarly to other smaller units. These developers point out that IRIS, with its moderate size and simple design, is ideally suited for modular construction. The economy of scale is replaced here with the economy of serial production of many small and simple components and prefabricated sections. They expect that construction of the first IRIS unit will be completed in three years, with subsequent production taking only two years.

Site layouts have been developed with multiple single units or multiple twin units. In each case, units will be constructed with enough space around them to allow the next unit to be built while the previous one is operating and generating revenue. Even with this separation, the plant footprint can be very compact: a site with three IRIS single modules providing 1,000 MWe is similar in size to – or smaller than – one housing a single unit of comparable total power.

Eventually, IRIS’ capital and production costs are expected to be comparable to those of larger plants. However, any small unit offers the potential for a funding profile and flexibility impossible to achieve with larger plants. As one module is finished and starts producing electricity, it will generate positive cash flow for the construction of the next module. Westinghouse estimates that 1,000 MWe delivered by three IRIS units built at three-year intervals, financed at 10 percent for 10 years, requires a maximum negative cash flow of less than $700 million (compared with about three times that for a single 1,000-MWe unit). For developed countries, small modular units offer the opportunity of building capacity as necessary; for developing countries, smaller units may represent the only option, since such countries’ electric grids are likely unable to accommodate 1,000-plus-MWe single units.
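To make the shape of this cash-flow argument concrete, the following minimal sketch compares the cumulative cash position of a staggered, three-module build with that of a single large unit. It is written in Python purely for illustration; the per-unit cost, construction period, spend profile and revenue rate are arbitrary assumed values (not Westinghouse or IRIS figures), and financing charges and discounting are ignored.

    # Minimal sketch of the staggered-build cash-flow argument.
    # All numbers are illustrative assumptions in arbitrary units,
    # not figures from this article or from Westinghouse.

    def cumulative_cash_flow(unit_cost, build_years, start_years, annual_revenue, horizon):
        """Yearly cumulative cash position for units started in the given years.

        unit_cost      capital cost per unit, spent evenly over build_years
        start_years    construction start year of each unit (0-indexed)
        annual_revenue net revenue per operating unit per year
        horizon        number of years to simulate
        """
        flows = [0.0] * horizon
        for start in start_years:
            for year in range(horizon):
                if start <= year < start + build_years:
                    flows[year] -= unit_cost / build_years   # construction spend
                elif year >= start + build_years:
                    flows[year] += annual_revenue            # unit is generating revenue
        cumulative, running = [], 0.0
        for f in flows:
            running += f
            cumulative.append(running)
        return cumulative

    # Three small modules started three years apart vs. one large unit of the same total cost.
    staggered = cumulative_cash_flow(unit_cost=1.0, build_years=3,
                                     start_years=[0, 3, 6], annual_revenue=0.25, horizon=12)
    single = cumulative_cash_flow(unit_cost=3.0, build_years=5,
                                  start_years=[0], annual_revenue=0.75, horizon=12)

    print("Peak negative exposure, staggered modules:", min(staggered))
    print("Peak negative exposure, single large unit:", min(single))

Under these assumed numbers, the staggered build’s peak negative exposure is less than half that of the single large unit, which is the general shape of the advantage the IRIS developers describe; the exact ratio depends entirely on the assumed costs, schedules and revenues.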

Distributed generation. The advent of reactors much smaller than those being promoted today means that reactors will be available to serve smaller grids and to be put into use for distributed generation (with power plants close to the demand centers and the main grid used for back-up). This does not mean, however, that large units serving national grids will become obsolete – as some appear to wish.

WORLD MARKET

One aspect of the Global Nuclear Energy Partnership program is the international deployment of appropriately sized reactors with desirable design and operational characteristics (some of which include improved economics, greater safety margins, longer operating cycles with refueling intervals of up to three years, better proliferation resistance and sustainability). Several of the designs described earlier in this paper are likely to meet these criteria.

IRIS itself is being developed by an international team of 20 organizations from 10 countries (Brazil, Croatia, Italy, Japan, Lithuania, Mexico, Russia, Spain, the United Kingdom and the United States) on four continents – a clear demonstration of how widely reactor development is now proceeding.

Major reactor designers and vendors are now typically international in character and marketing structure. To wit: the United Kingdom’s recent announcement that it would renew its nuclear power capacity was anticipated by four companies lodging applications for generic design approval – two from the United States (each with Japanese involvement), one from Canada and one from France (with German involvement). These are all big units, but in demonstrating the viability of late third-generation technology, they will also encourage consideration of smaller plants where those are most appropriate.