Infrastructure and the Economy

With utility infrastructure aging rapidly, reliability of service is threatened. At the same time, the economy is hurting, unemployment is rising, environmental mandates are multiplying, and the investment portfolios of both seniors and soon-to-retire boomers have fallen dramatically. Everyone agrees change is needed. The question is: how?

In every one of these respects, state regulators have the power to effect change. In fact, the policy-setting authority of the states is not only an essential complement to federal energy policy but also a critical building block for economic recovery.

There is no question we need infrastructure development. Almost 26 percent of the distribution infrastructure owned and operated by the electric industry is at or past the end of its service life. For transmission, the number is approximately 15 percent, and for generation, about 23 percent. And that’s before considering the rising demand for electricity needed to drive our digital economy.

The new administration plans to spend hundreds of billions of dollars on infrastructure projects. However, most of the money will go toward roads, transportation, water projects and wastewater systems, with lesser amounts designated for renewable energy. It appears that only a small portion of the funds will be designated for traditional central station generation, transmission and distribution. And where such funds are available, they appear to be in the form of loan guarantees, especially in the transmission sector.

The U.S. transmission system is in need of between $50 billion and $100 billion of new investment over the next 10 years, and approximately $300 billion by 2030. These investments are required to connect renewable energy sources, make the grid smarter, improve electricity market efficiency, reduce transmission-related energy losses, and replace assets that are too old. In the next three years alone, the investor-owned utility sector will need to spend about $30 billion on transmission lines.

Spending on distribution over the next decade could approximate $200 billion, rising to $600 billion by 2030. About $60 billion to $70 billion of this will be spent in just the next three years.

The need for investment in new generating stations is a bit more difficult to estimate, owing to the uncertainties surrounding the technologies that will prove the most economic under future greenhouse gas regulations and other technology preferences of the Congress and administration. However, it could easily be somewhere between $600 billion and $900 billion by 2030. Of this amount, between $100 billion and $200 billion could be invested over the next three years and as much as $300 billion over the next 10. It will be mostly later in that 10-year period, and beyond, that new nuclear and carbon-compliant coal capacity is expected to come on line in significant amounts. That will raise generating plant investments dramatically.

Jobs, and the Job of Regulators

All of this construction would maintain or create a significant number of jobs. We estimate that somewhere between 150,000 and 300,000 jobs could be created annually by this build out, including jobs related to construction, post-construction utility operating positions, and general economic "ripple effect" jobs through 2030.

These are sustainable levels of employment – jobs every year, not just one-time surges.

In addition, others have estimated that the development of the smart grid could add between 150,000 and 280,000 jobs. Clearly, then, utility generation, transmission and distribution investments can provide a substantial boost for the economy, while at the same time improving energy efficiency, interconnecting critical renewable energy sources and making the grid smarter.

The beauty is that no federal legislation, no taxpayer money and no complex government grant or loan processes are required. This is virtually all within the control of state regulators.

Timely consideration of utility permit applications and rate requests, as well as project pre-approvals by regulators, allowance of construction work in progress in rate base, and other progressive regulatory practices would vastly accelerate the pace at which these investments could be made and financed, and new jobs created. Delays in permitting and approval not only slow economic recovery, but also create financial uncertainty, potentially threatening ratings, reducing earnings and driving up capital costs.

Helping Utility Shareholders

This brings us to our next point: Regulators can and should help utility shareholders. Although they have a responsibility for controlling utility rates charged to consumers, state regulators also need to provide returns on equity and adopt capital structures that recognize the risks, uncertainties and investor expectations that utilities face in today’s and tomorrow’s very de-leveraged and uncertain financial markets.

It is now widely acknowledged that risk has not been properly priced in the recent past. As with virtually all other industries, equity will play a far more critical role in utility project and corporate finance than in the past. For utilities to attract the equity needed for the buildout just described, equity must earn its full, risk-adjusted return. This requires a fresh look at stockholder expectations and requirements.

A typical utility stockholder is not some abstract, occasionally demonized, capitalist, but rather a composite of state, city, corporate and other pension funds, educational savings accounts, individual retirement accounts and individual shareholders who are in, or close to, retirement. These shares are held largely by, or for the benefit of, everyday workers of all types, both employed and retired: government employees, first responders, trades and health care workers, teachers, professionals, and other blue and white collar workers throughout the country.

These people live across the street from us, around the block, down the road or in the apartments above and below us. They rely on utility investments for stable income and growth to finance their children’s education, future home purchases, retirement and other important quality-of-life activities. They comprise a large segment of the population that has been injured by the economy as much as anyone else.

Fair public policy suggests that regulators be mindful of this and that they allow adequate rates of return needed for financial security. It also requires that regulatory commissions be fair and realistic about the risk premiums inherent in the cost of capital allowed in rate cases.

The cost of providing adequate returns to shareholders is not particularly high. Ironically, the passion of the debate that surrounds cost of capital determinations in a rate case is far greater than the monetary effect that any given return allowance has on an individual customer’s bill.

Typically, the differential return on equity in dispute in a rate case – perhaps between 100 and 300 basis points – represents between 0.5 and 2 percent of a customer’s bill for a "wires only" company. (The impact on the bills of a vertically integrated company would be higher.) Acceptance of the utility’s requested rate of return would no doubt have a relatively small adverse effect on customers’ bills, while making a substantial positive impact on the quality of the stockholders’ holdings. Fair, if not favorable, regulatory treatment also results in improved debt ratings and lower debt costs, which accrue to the benefit of customers through reduced rates.
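To make that arithmetic concrete, here is a minimal sketch (in Python, with purely illustrative figures that do not come from any actual rate case) of how a disputed return-on-equity differential flows through to a revenue requirement and a percentage bill impact for a hypothetical wires-only utility.

```python
# Illustrative only: how an ROE differential flows through to customer bills.
# All figures are hypothetical assumptions, not values from the article.

rate_base = 1_000_000_000      # $1.0 billion of net distribution/transmission plant
equity_ratio = 0.50            # share of rate base financed with common equity
roe_differential = 0.02        # 200 basis points of disputed return on equity
tax_gross_up = 1 / (1 - 0.35)  # revenue needed per dollar of after-tax equity return
annual_delivery_revenue = 1_200_000_000  # total "wires" revenue collected from customers

# Additional revenue requirement created by the higher allowed ROE
delta_revenue_requirement = rate_base * equity_ratio * roe_differential * tax_gross_up

# Expressed as a share of what customers already pay
bill_impact_pct = 100 * delta_revenue_requirement / annual_delivery_revenue

print(f"Added revenue requirement: ${delta_revenue_requirement/1e6:.1f} million/year")
print(f"Approximate bill impact:   {bill_impact_pct:.1f}%")
# With these assumed numbers the 200 bp differential is roughly a 1.3% bill change,
# consistent with the 0.5-2 percent range described above.
```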

The List Doesn’t Stop There

Regulators can also be helpful in addressing other challenges of the future. The linchpins of cost-effective energy and climate change policy are energy efficiency (EE) and demand-side management (DSM).

Energy efficiency is truly the low-hanging fruit, capable of providing immediate, relatively inexpensive reductions in emissions and customers’ bills. However, reductions in customers’ energy use run contrary to utility financial interests, unless offset by regulatory policy that removes the disincentives. Depending upon the particulars of a given utility, these policies could include revenue decoupling and the authorization of incentive – or at least fully adequate – returns on EE, DSM and smart grid investments, as well as recovery of related expenses.

Additional considerations could include accelerated depreciation of EE and DSM investments and the approval of rate mechanisms that recover lost profit margins created by reduced sales. These policies would positively address a host of national priorities in one fell swoop: the promotion of energy efficiency, greenhouse gas reduction, infrastructure investment, technology development, increased employment and, through appropriate rate base and rate of return policy, improved stockholder returns.
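As a rough illustration of how revenue decoupling removes the sales disincentive just described, the hypothetical sketch below compares allowed and actual revenues and computes the true-up a decoupling rider would collect or refund; every figure is invented for the example.

```python
# Hypothetical revenue-decoupling true-up. All numbers are invented for illustration.

allowed_revenue_per_customer = 600.0   # $/customer/year set in the last rate case
customers = 100_000

actual_sales_kwh = 480_000_000         # sales fell because EE/DSM programs worked
delivery_rate_per_kwh = 0.12           # volumetric delivery charge

allowed_revenue = allowed_revenue_per_customer * customers
actual_revenue = actual_sales_kwh * delivery_rate_per_kwh

# Under decoupling, the shortfall (or surplus) is returned through a rate rider,
# so the utility's cost recovery no longer depends on selling more kilowatt-hours.
true_up = allowed_revenue - actual_revenue

print(f"Allowed revenue: ${allowed_revenue:,.0f}")
print(f"Actual revenue:  ${actual_revenue:,.0f}")
print(f"Decoupling true-up to collect from (or credit to) customers: ${true_up:,.0f}")
```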

The Leadership Opportunity

Oftentimes, regulatory decision making is narrowly focused on a few key issues in isolation, usually in the context of a particular utility, but sometimes on a statewide generic basis. Rarely is state regulatory policy viewed in a national context. Almost always, issues are litigated individually in highly partisan fashion, with little integration into a larger whole, and utility shareholder interests are usually underrepresented.

The time seems appropriate – and propitious – for regulators to lead the way to a major change in this paradigm while addressing the many urgent issues that face our nation. Regulators can make a difference, probably far beyond that for which they presently give themselves credit.

Power and Patience

The U.S. utility industry – particularly its electricity-producing branch; there also are natural gas and water utilities – has found itself in a new, and very uncomfortable, position. Throughout the first quarter of 2009 it was front and center in the political arena.

Politics has been involved in the U.S. electric generation and distribution industry since its founding in the late 19th Century by Thomas Edison. Utilities have been regulated entities almost since the beginning, and especially since the 1930s, when the federal government began to take a much greater role in the direction and regulation of private enterprise and national economics.

What is new as we are about to enter the second decade of the 21st Century is that not only is the industry being in large part blamed for a newly discovered pollutant, carbon dioxide, which is naturally ubiquitous in the Earth’s atmosphere, but it also is being tasked with pulling the nation out of its worst economic recession since the Great Depression of the 1930s. Oh, and in your spare time, electric utilities, enable the remaking of the automobile industry, eliminate the fossil fuels which you have used to generate ubiquitous electricity for 100 years, and accomplish all this while remaining fiscally sound and providing service to all Americans. Finally, please don’t make electricity unaffordable for the majority of Americans.

It’s doubtful that very many people have ever accused politicians of being logical, but in 2009 they seem to have decided to simultaneously defy the laws of physics, gravity, time, history and economics. They want the industry to completely remake itself, going from the centralized large-plant generation model created by Edison to widely dispersed smaller-generation; from fossil fuel generation to clean “renewable” generation; from being a mostly manually controlled and maintained system to becoming a self-healing ubiquitously digitized and computer-controlled enterprise; from a marginally profitable (5-7 percent) mostly privately owned system to a massive tax collection system for the federal government.

Is all this possible? The answer likely is yes, but in the timeframe being posited, no.

Despite political co-option of the terms “intelligent utility” and “smart grid” in recent times, the electric utility industry has been working in these directions for many years. Distribution automation (DA) – being able to control the grid remotely – is nothing new. Utilities have been working on DA and SCADA (supervisory control and data acquisition) systems for more than 20 years. They also have been building out communications systems, first analog radio for dispatching service crews to far-flung territories, and in recent times, digital systems to reach all of the millions of pieces of equipment they service. The terms were invented not by politicians but by the utilities themselves.

Prior to 2009, all of these concepts were under way at utilities. WE Energies has a working “pod” of all-digital, self-healing, radially designed feeders. The concept is being tried in Oklahoma, Canada and elsewhere. But the pods are small and still experimental. Pacific Gas and Electric, PEPCO and a few others have demonstration projects of “artificial intelligence” on the grid to automatically switch power around outages. TVA and several others have new substation-level servers that allow communications with, data collection from and monitoring of IEDs (intelligent electronic devices) while simultaneously providing a “view” into the grid from anywhere else in the utility, including the boardroom. But all of these are relatively small-scale installations at this point. To distribute them across the national grid is going to take time and a tremendous amount of money. The transformation to a smart grid is under way and accelerating. However, to this point, the penetration is relatively small. Most of the grid still is big and dumb.

Advanced metering infrastructure (AMI) actually was invented by utilities, although vendors serving the industry have greatly advanced the art since the mid-1990s. Utilities installed earlier-generation AMI, called automated meter reading (AMR), for about 50 percent of all customers, while the other 50 percent still were being read by meter readers traipsing through people’s yards.

AMI, which allows two-way communications with the meters (AMR is mostly one-way), is advancing rapidly, but still has reached less than 20 percent of American homes, according to research by AMI guru Howard Scott and Sierra Energy Group, the research and analysis division of Energy Central. Large-scale installations by Southern Company, Pacific Gas and Electric, Edison International and San Diego Gas and Electric, are pushing that percentage up rapidly in 2009, and other utilities were in various stages of pilots. The first installation of a true two-way metering system was at Kansas City Power & Light Co. (now Great Plains Energy) in the mid-1990s.

So the intelligent utility and smart grid were under development by utilities before politicians got into the act. However, the build-out was expected to take perhaps 30 years or more before being completed down to the smallest municipal and co-operative utilities. Many of the smaller utilities haven’t even started pilots. Xcel Energy, Minneapolis, is building a smart grid model in one city, Boulder, Colo., but by May 2009, two of the primary architects of the effort, Ray Gogel and Mike Carlson, had left Xcel. Austin Energy has parts of a smart grid installed, but it still reaches only a portion of Austin’s population, and “home automation” reaches an even smaller proportion.

Numerous “paper” models exist for these concepts. One, developed by Sierra Energy Group more than three years ago, is shown in Figure 1.

Other major portions of what is being envisioned by politicians have yet to be invented or developed. There is no reasonably priced, reasonably practical electric car, nor are there standardized connection systems to recharge one. There are no large-scale transmission systems to reach remote wind farms or solar-generating facilities, and there is large-scale resistance from environmentalists to building such transmission facilities. Despite some political pronouncements, renewable generation, other than hydroelectric dams, still produces less than 3 percent of America’s electricity, and that percentage is climbing very slowly.

Yes, the federal government was throwing some money at the build-out in early 2009 – about $4 billion for the smart grid and some $30 billion to $45 billion for renewable energy. But these are drops in the bucket compared with the amount of money – estimated by responsible economists at $3 trillion or more – required just to build and replace the aging transmission systems and automate the grid. This is money utilities don’t have and can’t get without making the cost of electricity prohibitive for a large percentage of the population. Despite one political pronouncement, windmills in the Atlantic Ocean are not going to replace coal-fired generation in any conceivable time frame, certainly not in the four years of the current administration.

Then, you have global warming. As a political movement, global warming serves as a useful stick to the carrot of federal funding for renewable energy. However, the costs for the average American of any type of tax on carbon dioxide are likely to be very heavy.

In the midst of all this, utilities still have to go to public service commissions in all 50 states for permission to raise rates. If they can’t raise rates – something resisted by most PSCs – they can’t generate the cash to pay for this massive build-out. PSC commissioners also are politicians, by the way, with an average tenure of only about four years, which is hardly long enough to learn how the industry works, much less how to radically reconfigure it in a similar time-frame.

Despite a shortage of engineers and other highly skilled workers in the United States, the smart grid and intelligent utilities will be built in the U.S. But it is a generational transformation, not something that can be done overnight. To expect the utility industry to gear up to get all this done in time to “pull us out” of the most serious recession of modern times just isn’t realistic – it’s political. Add to the scale of the problem political wrangling over every concept and every dollar, mix in a lot of government bureaucracy that takes months to decide how to distribute deficit dollars, and throw in carbon mitigation for global warming and it’s a recipe for disaster. Expect the lights to start flickering along about…now. Whether they only flicker or go out for longer periods is out of the hands of utilities – it’s become a political issue.

Measuring Smart Metering’s Progress

Smart or advanced electricity metering, using a fixed network communications path, has been with us since pioneering installations in the US Midwest in the mid-1980s. That’s 25 years ago, during which time we have seen incredible advancements in information and communication technologies.

Remember the technologies of 1985? The very first mobile phones were just being introduced. They weighed as much as a watermelon and cost nearly $9,000 in today’s dollars. SAP had just opened its first sales office outside of Germany, and Oracle had fewer than 450 employees. The typical personal computer had a 10 megabyte hard drive, and a dot-com Internet domain was just a concept.

We know how much these technologies have changed since then, how they have been embraced by the public, and (to some degree at least) where they are going in the future. This article looks at how smart metering technology has developed over the same period. What has been the catalyst for advancements? And, most important, what does that past tell us about the future of smart metering?

Peter Drucker once said that “trying to predict the future is like trying to drive down a country road at night with no lights while looking out the back window.”

Let’s take a brief look out the back window, before driving forward.

Past Developments

Developments in the parallel field of wireless communications, with its strong standards base, are readily delineated into clear technology generations. While we cannot as easily pinpoint definitive phases of smart metering technology, we can see some major transitions and discern patterns from the large deployments illustrated in Figure 1, and perhaps, even identify three broad smart metering “generations.”

The first generation is probably the clearest to delineate. The first 10 years of smart metering deployments (until about 2004) were all one-way wireless, limited two-way wireless, or very low-bandwidth power-line carrier communications (PLC) to the meter, concentrated in the U.S. The market at this time was dominated by Distribution Control Systems, Inc. (DCSI) and what was then CellNet Data Systems, Inc. Itron Fixed Network 2.0 and Hunt Technologies’ TS1 solution would also fit into this generation.

More than technology, the strongest characteristic of this first generation is the limited scope of business benefits considered. With the exception of Puget Sound Energy’s time-of-use pricing program, the business case for these early deployments was focused almost exclusively on reducing meter reading costs. Effectively, these early deployments reproduced the same business case as mobile automated meter reading (AMR).

By 2004, approximately 10 million of these smart meters had been installed in the U.S. (about 7 percent of the national total); however, whatever public perception of smart metering there was at the time was decidedly mixed. The deployments received scant media coverage, which focused almost solely on troubled time-of-use pricing programs, perhaps digressing briefly to cover smart metering vendor mergers and lawsuits. But generally smart meters, by any name, were unknown among the general population.

Today’s Second Generation

By the early 2000s, some utilities, notably PPL and PECO, both in Pennsylvania, were beginning to expand the use of their smart metering infrastructure beyond the simple meter-to-cash process. With incremental enhancements to application integration that were based on first generation technology, they were initiating projects to use smart metering to: transform outage identification and response; explore more frequent reading and more granular data; and improve theft detection.

These initiatives were the first to give shape to a new perspective on smart metering, but it was power company Enel’s dramatic deployment of 30 million smart meters across Italy that crystallized the second generation.

In the four years leading up to 2005, Enel fully deployed key technology advancements, such as universal and integrated remote disconnect and load limiting, that previously did not exist on any real scale. These changes enabled a dramatically broader scope of business benefits, as this was the first fully deployed solution designed from the ground up to look well beyond reducing meter reading costs.

The impact of Enel’s deployment and subsequent marketing campaign on smart metering developments in other countries should not be underestimated, particularly among politicians and regulators outside the U.S. In other European countries and regions such as Scandinavia, the same model (and in many cases the same technology) was deployed. Enel demonstrated to the rest of the world what could be done without any high-profile public backlash. It set a competitive benchmark that had policymakers in other countries questioning progress in their jurisdictions and challenging their own utilities to achieve the same.

North American Resurgence

As significant as Enel’s deployment was for the global development of smart metering, it is not the basis for today’s ongoing smart metering technology deployments, now concentrated in North America.

More than the challenges of translating a European technology to North America, the business objectives and customer environments were different. As the Enel deployment came to an end, governments and regulators – particularly those in California and Ontario – were looking for smart metering technology to be the foundation for major energy conservation and peak-shifting programs. They expected the technology to support a broad range of pricing programs, provide on-demand reads within minutes, and gather hourly interval profile data from every meter.

Utilities responded. Pacific Gas & Electric (PG&E), with a total of 9 million electric and natural gas meters, kick-started the movement. Others, notably Southern California Edison (SCE), invested the time and effort to advance the technology, championing additions such as remote firmware upgrades and home area network support.

As a result, a near-dormant North American smart metering market was revived in 2007, and the standard functionality we see in most smart metering specifications today – and the technology basis for most planned deployments in North America – was established.

These technology changes also contributed to a shift in public awareness of smart meters. As smart metering was considered by more local utilities, and more widely associated with growing interest in energy conservation, media interest grew exponentially. Between 2004 and 2008, references to smart or advanced meters (carefully excluding smart parking meters) in the world’s major newspapers nearly doubled every year, to the point where the technology is now almost common knowledge in many countries.

The Coming Third Generation

In the 25 years since smart meters were first substantially deployed, the technology has progressed considerably. While progress has not been as rapid as advancements in consumer communications technologies, smart metering developments such as universal interval data collection, integrated remote disconnect and load limiting, remote firmware upgrades and links to a home network are substantial advancements.

All of these advancements have been driven by the combination of forward-thinking government policymakers, a supportive regulator and, perhaps most important, a large utility willing to invest the time and effort to understand and demand more from the vendor community.

With this understanding of the drivers, and based on the technology deployment plans, we can map out key future smart metering technology directions. We expect the next generation of smart metering to exhibit two dominant differences from today’s technology: increased standardization across the entire smart metering solution scope, and changes to back-office systems architecture that enable the extended benefits of smart metering.

Increased Standardization

The transition to the next generation of smart metering will be known more for its changes to how a smart meter works, rather than what a smart meter does.

The direct functions of a smart meter appear to be largely set. We expect to see continued incremental advancements in data quality and read reliability; improved power quality measurement; and more universal deployment of remote disconnect and load limiting.

But how a smart meter provides these functions will further change. We believe the smart meter will become a much more integrated part of two networks: one inside the home; the other along the electricity distribution network.

Generally, an expectation of standards for communication from the meter into a home area network is well accepted by the industry – although the actual standard to be applied is still in question. As this home area network develops, we expect a smart meter to increasingly become a member of this network, rather than the principal mechanism in creating one.

As other smart grid devices are deployed further down the low voltage distribution system, we expect utilities to demand that the meter conform to these network communications standards. In other words, utilities will continue to reject the idea that other types of smart grid devices – those with even greater control of the electrical network – be incorporated into a proprietary smart meter local area network.

It appears that most of this drive to standardization will not be led by utilities in North America. For one, technology decisions in North America are rapidly being completed (for this first round of replacements, at least). The recent Federal Energy Regulatory Commission (FERC) staff report, entitled “2008 Assessment of Demand Response and Advanced Metering,” found that of the 145 million meters in the U.S., utilities have already contracted to replace nearly 52 million with smart meters over the next five to seven years.

IBM’s analysis indicated that larger utilities have declared plans to replace these meters even faster – approximately 33 million smart meters by 2013. The meter communications approach, and quite often the vendors chosen for these deployments, has typically already been selected, leaving little room to fundamentally change the underlying technological approach.

Outside of Worldwide Interoperability for Microwave Access (WiMAX) experiments by utilities such as American Electric Power (AEP) and those in Ontario, and shared services initiatives in Texas and Ontario, none of the remaining large North American utilities appear to have a compelling need to drive dramatic technology advancements, given rate and time pressures from regulators.

Conversely, a few very large European programs are poised to push the technology toward much greater standards adoption:

  • EDF in France has started a trial of 300,000 meters using standard PLC communications from the meter to the concentrator. The full deployment to all 35 million EDF meters is expected to follow.
  • The U.K. government recently announced a mandatory replacement of both electricity and natural gas meters for all 46 million customers between 2010 and 2020. The U.K.’s unique market structure with competitive retailers having responsibility for meter ownership and operation is driving interoperability standards beyond currently available technology.
  • With its PRIME initiative, the Spanish utility Iberdrola plans to develop a new PLC-based, open standard for smart metering. It is starting with a pilot project in 2009, leading to full deployment to more than 10 million residential customers.

The combination of these three smart metering projects alone will affect 91 million smart meters, equal to two thirds of the total U.S. market. This European focus is expected to grow now that the Iberdrola project has taken the first steps to be the basis for the European Commission’s Open Meter initiative, involving 19 partners from seven European countries.

Rethinking Utility System Architectures

Perhaps the greatest changes to future smart metering systems will have nothing to do with the meter itself.

To date, standard utility applications for customer care and billing, outage management, and work management have been largely unchanged by smart metering. In fact, to reduce risk and meet schedules, utilities have understandably shielded legacy systems from the changes needed to support a smart meter rollout or new tariffs. They have looked to specialized smart metering systems, particularly meter data management systems (MDMS), to bridge the gap between a new smart metering infrastructure and their legacy systems.

As a result, many of the potential benefits of a smart metering infrastructure have yet to be fully realized. For instance, billing systems still operate on cycles set by past meter reading routes. Most installed outage management applications are unable to take advantage of a direct near-real-time connection to nearly every end point.

As application vendors catch up, we expect the third generation of smart meters to be characterized by changes to the overall utility architectures and the applications that comprise them. As applications are enhanced, and enterprise architectures adapted to the smart grid, we expect to see significant architectural changes, such as:

  • Much of the message-brokering function between disparate head-end systems and utility applications, today handled in an MDMS, will migrate to the utility’s service bus.
  • As smart meters increasingly become devices on a standards-based network, more general network management applications now widely deployed for telecommunications networks will supplement vendor head-end systems.
  • Complex estimating and editing functions will become less valuable as the technology in the field becomes more reliable (a minimal sketch of this kind of estimation follows this list).
  • Security of the system, from home network to the utility firewall, needs to meet the much higher standards associated with grid operations, rather than those arising from the current meter-as-the-cash-register perspective.
  • Add-on functionality provided by some niche vendors will migrate to larger utility systems as they evolve to a smart metering world. For instance, Web presentment of interval data to customers will move from dedicated sites to become a broad part of utilities’ online offerings.
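To illustrate the kind of estimating and editing referred to above, here is a minimal, hypothetical sketch of interval-data gap estimation by linear interpolation; real validation, estimating and editing (VEE) rules in meter data management systems are considerably more elaborate.

```python
# Minimal sketch of interval-data estimation ("VEE"-style gap filling).
# Hypothetical readings; real MDMS rules also consider usage history, weather, etc.

readings = [3.2, 3.4, None, 3.9, 4.1]  # kWh per interval; None = missing interval

def estimate_gaps(intervals):
    """Fill single missing intervals by linear interpolation of the neighbors."""
    filled = list(intervals)
    for i, value in enumerate(filled):
        if (value is None and 0 < i < len(filled) - 1
                and filled[i - 1] is not None and filled[i + 1] is not None):
            # A production system would also flag the value as estimated
            filled[i] = (filled[i - 1] + filled[i + 1]) / 2
    return filled

print(estimate_gaps(readings))  # [3.2, 3.4, 3.65, 3.9, 4.1]
```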

Conclusions

Looking back at 25 years of smart metering technology development, we can see that while it has progressed, it has not developed at the pace of the consumer communications and computing technologies it relies upon – and for good reasons.

Utilities operate under a very different investment timeframe compared to consumer electronics; decisions made by utilities today need to stand for decades, rather than mere months. While consumer expectations of technology and service continue to grow with each generation, in the regulated electricity distribution industry, any customer demands are often filtered through a blurry political and regulatory lens.

Even with these constraints, smart metering technology has evolved rapidly, and will continue to change in the future. The next generation, with increased standardized integration with other networks and devices, as well as changes to back office systems, will certainly transform what we now call smart metering. So much so that, much sooner than 25 years from now, those looking back at today’s smart meters may very well see them as we now see those watermelon-sized cell phones of the 1980s.

How Intelligent Is Your Grid?

Many people in the utility industry see the intelligent grid — an electric transmission and distribution network that uses information technology to predict and adjust to network changes — as a long-term goal that utilities are still far from achieving. Energy Insights research, however, indicates that today’s grid is more intelligent than people think. In fact, utilities can begin building the network of the future today by better leveraging their existing resources and focusing on the intelligent-grid backbone.

DRIVERS FOR THE INTELLIGENT GRID

Before discussing the intelligent grid backbone, it’s important to understand the drivers directing the intelligent grid’s progress. While many groups — such as government, utilities and technology companies — may be pushing the intelligent grid forward, they are also slowing it down. Here’s how:

  • Government. With the 2005 U.S. Energy Policy Act and the more recent 2007 Energy Independence and Security Act, the federal government has acknowledged the intelligent grid’s importance and is supporting investment in the area. Furthermore, public utility commissions (PUCs) have begun supporting intelligent grid investments like smart metering. At the same time, however, PUCs have a duty to maintain reasonable prices. Since utilities have not extensively tested the benefits of some intelligent grid technologies, such as distribution line sensors, many regulators hesitate to support utilities investing in intelligent grid technologies beyond smart metering.
  • Utilities. Energy Insights research indicates that information technology, in general, enables utilities to increase operational efficiency and reduce costs. For this reason, utilities are open to information technology; however, they’re often looking for quick cost recovery and benefits. Many intelligent grid technologies provide longer-term benefits, making them difficult to cost-justify over the short term. Since utilities are risk-averse, this can make intelligent grid investments look riskier than traditional information technology investments.
  • Technology. Although advanced enough to function on the grid today, many intelligent grid technologies could become quickly outdated thanks to the rapidly developing marketplace. What’s more, the life span of many intelligent grid technologies is not as long as those of traditional grid assets. For example, a smart meter’s typical life span is about 10 to 15 years, compared with 20 to 30 years for an electro-mechanical meter.

With strong drivers and competing pressures like these, it’s not a question of whether the intelligent grid will happen but when utilities will implement new technologies. Given the challenges facing the intelligent grid, the transition will likely be more of an evolution than a revolution. As a result, utilities are making their grids more intelligent today by focusing on the basics, or the intelligent grid backbone.

THE INTELLIGENT GRID BACKBONE

What comprises this backbone? Answering this question requires a closer look at how intelligence changes the grid. Typically, a utility has good visibility into the operation of its generation and transmission infrastructure but poor visibility into its distribution network. As a result, the utility must respond to a changing distribution network based on very limited information. Furthermore, if a grid event requires attention — such as in the case of a transformer failure — people must review information, decide to act and then manually dispatch field crews. This type of approach translates to slower, less informed reactions to grid events.

The intelligent grid changes these reactions through a backbone of technologies — sensors, communication networks and advanced analytics — especially developed for distribution networks. To better understand these changes, we can imagine a scenario where a utility has an outage on its distribution network. As shown in Figure 1, additional grid sensors collect more information, making it easier to detect problems. Communications networks then allow sensors to convey the problem to the utility. Advanced analytics can efficiently process this information and determine more precisely where the fault is located, as well as automatically respond to the problem and dispatch field crews. These components not only enable faster, better-informed reactions to grid problems, they can also do real-time pricing, improve demand response and better handle distributed and renewable energy sources.
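The outage scenario above can be thought of as a simple event pipeline: sensors report, the communications network delivers the reports, and analytics localize the fault and trigger a response. The sketch below is a purely conceptual illustration of that flow; the device names, feeder sections and dispatch step are assumptions for the example, not a description of any particular vendor’s system.

```python
# Conceptual sketch of the intelligent-grid backbone reacting to a distribution fault.
# Sensor IDs, feeder sections and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class SensorReport:
    device_id: str            # line or transformer sensor on a feeder section
    feeder_section: str
    fault_current_seen: bool  # True if the sensor saw fault-level current pass by

def locate_fault(reports):
    """The fault lies just downstream of the last sensor that saw fault current."""
    downstream_of = None
    for report in sorted(reports, key=lambda r: r.feeder_section):
        if report.fault_current_seen:
            downstream_of = report.feeder_section
    return downstream_of

def respond(fault_section):
    # In a real system this would drive switching orders and an outage-management
    # work order; here we simply report the decision.
    print(f"Fault located downstream of section {fault_section}; dispatching crew.")

# Reports delivered over the field communications network
reports = [
    SensorReport("S1", "A", True),
    SensorReport("S2", "B", True),
    SensorReport("S3", "C", False),
]
respond(locate_fault(reports))  # Fault located downstream of section B
```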

A CLOSER LOOK AT BACKBONE COMPONENTS

A deeper dive into each of these intelligent grid backbone technologies reveals how utilities are gaining more intelligence about their grid today.

Network sensors are important not only for real-time operations — such as locating faults and connecting distributed energy sources to the grid — but also for providing a rich historical data source to improve asset maintenance and load research and forecasting. Today, more utilities are using sensors to better monitor their distribution networks; however, they’re focused primarily on smart meters. The reason for this is that smart meters have immediate operational benefits that make them attractive for many utilities today, including reducing meter reader costs, offering accurate billing information, providing theft control and satisfying regulatory requirements. Yet this focus on smart meters has created a monitoring gap between the transmission network and the smart meter.

A slew of sensors is available from companies such as General Electric, ABB, PowerSense, GridSense and Serveron to fill this monitoring gap. These sensors track everything from load balancing and transformer status to circuit breakers and tap changers, energized downed lines, high-impedance faults and stray voltage. Yet utilities hesitate to invest in them because they lack the immediate operational benefits of smart meters.

By monitoring this gap, however, utilities can secure longer-term grid benefits, such as a reduced need to build new generation capacity. Utilities have found they can begin monitoring this gap by:

  • Prioritizing sensor investments. Customer complaints and regulatory pressure have pushed some utilities to take action for particular parts of their service territory. For example, one utility Energy Insights studied received numerous customer complaints about a particular feeder’s reliability, so the utility invested in line sensors for that area. Another utility began considering sensor investments in troubled areas of its distribution network when regulators demanded that it improve its System Average Interruption Frequency Index (SAIFI) and System Average Interruption Duration Index (SAIDI) rankings from the bottom 50 percent to the top 25 percent of benchmarked utilities (a worked example of these indices follows this list). By focusing on such areas, utilities can achieve “quick wins” with sensors and build confidence in deploying additional sensors on the distribution grid.
  • Realizing it’s all about compromise. Even in high-priority areas, it may not make financial sense for a utility to deploy the full range of sensors for every possible asset. In some situations, utilities may target a particular area of the service territory with a higher density of sensors. For example, a large U.S. investor-owned utility with a medium-voltage sensing program placed a high density of sensors along a specific section of its service territory. On the other hand, utilities might cover a broader area of service territory with fewer sensors, similar to the approach taken by a large investor-owned utility Energy Insights looked at that monitored only transformers across its service territory.
  • Rolling in sensors with other intelligent grid initiatives. Some utilities find ways to combine their smart metering projects with other distribution network sensors or to leverage existing investments that could support additional sensors. One utility that Energy Insights looked at installed transformer sensors along with a smart meter initiative and leveraged the communications networks it used for smart metering.
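For reference, the SAIFI and SAIDI indices mentioned in the first bullet are straightforward ratios: total customer interruptions, and total customer-minutes of interruption, divided by the number of customers served. The short sketch below computes them from a hypothetical set of outage records (lower values are better, so improving the indices means reducing them).

```python
# SAIFI and SAIDI from hypothetical outage records.
# SAIFI = total customer interruptions / customers served
# SAIDI = total customer-minutes of interruption / customers served

customers_served = 50_000

# (customers interrupted, outage duration in minutes) -- invented example data
outages = [
    (1_200, 90),
    (300, 45),
    (4_000, 180),
]

saifi = sum(cust for cust, _ in outages) / customers_served
saidi = sum(cust * minutes for cust, minutes in outages) / customers_served

print(f"SAIFI = {saifi:.2f} interruptions per customer per year")
print(f"SAIDI = {saidi:.1f} minutes per customer per year")
```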

While sensors provide an important means of capturing information about the grid, communication networks are critical to moving that information throughout the intelligent grid — whether between sensors or field crews. Typically, to enable intelligent grid communications, utilities must either build new communications networks to bring intelligence to the existing grid or incorporate communication networks into new construction. Yet utilities today are also leveraging existing or recently installed communications networks to facilitate more sophisticated intelligent grid initiatives such as the following:

  • Smart metering and automated meter-reading (AMR) initiatives. With the current drive to install smart meters, many utilities are covering their distribution networks with communications infrastructure. Furthermore, existing AMR deployments may include communications networks that can bring data back to the utility. Some utilities are taking advantage of these networks to begin plugging other sensors into their distribution networks.
  • Mobile workforce. The deployment of mobile technologies for field crews is another hot area for utilities right now. Utilities are deploying cellular networks for field crew communications for voice and data. Although utilities have typically been hesitant to work with third-party communications providers, they’ve become more comfortable with outside providers after using them for their mobile technologies. Since most of the cellular networks can provide data coverage as well, some utilities are beginning to use these providers to transmit sensor information across their distribution networks.

Since smart metering and mobile communications networks are already in place, the incremental cost of installing sensors on these networks is relatively low. The key is making sure that different sensors and components can plug into these networks easily (for example, using a standard communications protocol).

The last key piece of the intelligent grid backbone is advanced analytics. Utilities are required to make quick decisions every day if they’re to maintain a safe and reliable grid, and the key to making such decisions is being well informed. Intelligent grid analytics can help utilities quickly process large amounts of data from sensors so that they can make those informed decisions. However, how quickly a decision needs to be made depends on the situation. Intelligent grid analytics assist with two types of decisions: very quick decisions (veQuids) and quick decisions (Quids). veQuids are made in milliseconds by computers and intelligent devices analyzing complex, real-time data – an intelligent grid vision that’s still a future development for most utilities.

Fortunately, many proactive decisions about the grid don’t have to be made in milliseconds. Many utilities today can make Quids — often manual decisions — to predict and adjust to network changes within a time frame of minutes, days or even months.

No matter how quick the decision, however, all predictive efforts are based on access to good-quality data. In putting their Quid capabilities to use today — in particular for predictive maintenance and smart metering — utilities are building not only intelligence about their grids but also a foundation for providing more advanced veQuids analytics in the future through the following:

  • The information foundation. Smart metering and predictive maintenance require utilities to collect not only more data but also more real-time data. Smart metering also helps break down barriers between retail and operational data sources, which in turn creates better visibility across many data sources to provide a better understanding of a complex grid.
  • The automation transition. To make the leap between Quids and veQuids requires more than just better access to more information — it also requires automation. While fully automated decision-making is still a thing of the future, many utilities are taking steps to compile and display data automatically as well as do some basic analysis, using dashboards from providers such as OSIsoft and Obvient Strategies to display high-level information customized for individual users. The user then further analyzes the data, and makes decisions and takes action based on that analysis. Many utilities today use the dashboard model to monitor critical assets based on both real-time and historical data, as in the minimal sketch following this list.
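As a minimal illustration of the kind of Quid-style analytic that sits behind such a dashboard, the sketch below flags transformers whose latest loading is well above their historical baseline; the asset names, readings and alert threshold are invented for the example.

```python
# Hypothetical "Quid"-style dashboard check: flag transformers whose current
# loading is far above their historical average. Data and threshold are invented.

historical_avg_load = {"TX-101": 62.0, "TX-102": 55.0, "TX-103": 71.0}  # % of rating
latest_load = {"TX-101": 64.0, "TX-102": 88.0, "TX-103": 73.0}

ALERT_MARGIN = 20.0  # flag anything more than 20 points above its baseline

for transformer, baseline in historical_avg_load.items():
    now = latest_load[transformer]
    if now - baseline > ALERT_MARGIN:
        print(f"{transformer}: loading {now:.0f}% vs baseline {baseline:.0f}% -> review")
# A planner reviews the flagged unit (TX-102 here) and decides whether to shift
# load or schedule maintenance -- a decision made in minutes or days, not milliseconds.
```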

ENSURING A MORE INTELLIGENT GRID TODAY AND TOMORROW

As these backbone components show, utilities already have some intelligence on their grids. Now they’re building on that intelligence by leveraging existing infrastructure and resources — whether it’s voice communications providers for data transmission or Quid resources to build a foundation for the veQuids of tomorrow. In particular, utilities need to look at:

  • Scalability. Utilities need to make sure that whatever technologies they put on the grid today can grow to accommodate larger portions of the grid in the future.
  • Flexibility. Given rapid technology changes in the marketplace, utilities need to make sure their technology is flexible and adaptable. For example, utilities should consider smart meters that have the ability to change out communications cards to allow for new technologies.
  • Integration. Due to the evolutionary nature of the grid, and with so many intelligent grid components that must work together (intelligent sensors at substations, transformers and power lines; smart meters; and distributed and renewable energy sources), utilities need to make sure these disparate components can work with one another. Utilities need to consider how to introduce more flexibility into their intelligent grids to accommodate the increasingly complex network of devices.

As today’s utilities employ targeted efforts to build intelligence about the grid, they must keep in mind that whatever action they take today – no matter how small – must ultimately help them meet the demands of tomorrow.

Policy and Regulatory Initiatives And the Smart Grid

Public policy is commonly defined as a plan of action designed to guide decisions for achieving a targeted outcome. In the case of smart grids, new policies are needed if smart grids are actually to become a reality. This statement may sound dire, given the recent signing into law of the 2007 Energy Independence and Security Act (EISA) in the United States. And in fact, work is underway in several countries to encourage smart grids and smart grid components such as smart metering. However, the risk still exists that unless stronger policies are enacted, grid modernization investments will fail to leverage the newer and better technologies now emerging, and smart grid efforts will never move beyond demonstration projects. This would be an unfortunate result when you consider the many benefits of a true smart grid: cost savings for the utility, reduced bills for customers, improved reliability and better environmental stewardship.

REGIONAL AND NATIONAL EFFORTS

As mentioned above, several regions are experimenting with smart grid provisions. At the national level, the U.S. federal government has enacted two pieces of legislation that support advanced metering and smart grids. The Energy Policy Act of 2005 directed U.S. utility regulators to consider time-of-use meters for their states. The 2007 EISA legislation has several provisions, including a list of smart grid goals to encourage two-way, real-time digital networks that stretch from a consumer’s home to the distribution network. The law also provides monies for regional demonstration projects and matching grants for smart grid investments. The EISA legislation also mandates the development of an “interoperability framework.”

In Europe, the European Union (E.U.) introduced a strategic energy technology plan in 2006 for the development of a smart electricity system over the next 30 years. The European Technology Platform organization includes representatives from industry, transmission and distribution system operators, research bodies and regulators. The organization has identified objectives and proposes a strategy to make the smart grid vision a reality.

Regionally, several U.S. states and Canadian provinces are focused on smart grid investments. In Canada, the Ontario Energy Board has mandated smart meters, with meter installation completion anticipated by 2010. In Texas, the Public Utilities Commission of Texas (PUCT) has finalized advanced metering legislation that authorizes metering cost recovery through surcharges. The PUCT also stipulated key components of an advanced metering system: two-way communications, time-date stamp, remote connect/disconnect, and access to consumer usage for both the consumer and the retail energy provider. The Massachusetts State Senate approved an energy bill that includes smart grid and time-of-use pricing. The bill requires that utilities submit a plan by Sept. 1, 2008, to the Massachusetts Public Utilities Commission, establishing a six-month pilot program for a smart grid. Most recently, California, Washington state and Maryland all introduced smart grid legislation.

AN ENCOMPASSING VISION

While these national and regional examples represent just a portion of the ongoing activity in this area, the issue remains that smart grid and advanced metering pilot programs do not guarantee a truly integrated, interoperable, scalable smart grid. Granted, a smart grid is not achieved overnight, but an encompassing smart grid vision should be in place as modernization and metering decisions are made, so that investment is consistent with the plan in mind. Obviously, challenges – such as financing, system integration and customer education – exist in moving from pilot to full grid deployment. However, many utility and regulatory personnel perceive these challenges to be ones of costs and technology readiness.

The costs are considerable. KEMA, the global energy consulting firm, estimates that the average cost of a smart meter project (representing just a portion of a smart grid project) is $775 million. The E.U.’s Strategic Energy Technology Plan estimates that the total smart grid investment required could be as much as $750 billion. These amounts are staggering when you consider that the market capitalization of all U.S. investor-owned electric utilities is roughly $550 billion. However, they’re not nearly as significant when you subtract the costs of fixing the grid using business-as-usual methods. Transmission and distribution expenditures are occurring with and without intelligence. The Energy Information Administration (EIA) estimates that between now and 2020 more than $200 billion will be spent to maintain and expand electricity transmission and distribution infrastructures in the United States alone.

Technology readiness will always be a concern in large system projects. Advances are being made in communication, sensor and security technologies, and IT. The Federal Communications Commission is pushing for auctions to accelerate adoption of different communication protocols. Price points are decreasing for pervasive cellular communication networks. Electric power equipment manufacturers are utilizing the new IEC 61850 standard to ensure interoperability among sensor devices. Vendors are using approaches for security products that will enable North American Electric Reliability Corp. (NERC) and critical infrastructure protection (CIP) compliance.

In addition, IT providers are using event-driven architecture to ensure responsiveness to external events, rather than processing transactional events, and reaching new levels with high-speed computer analytics. Leading service-oriented architecture companies are working with utilities to establish the underlying infrastructure critical to system integration. Finally, work is occurring in the standards community by the E.U., the GridWise Architecture Council (GAC), IntelliGrid, the National Energy Technology Laboratory (NETL) and others to create frameworks for linking communication and electricity interoperability among devices, systems and data flows.

THE TIME IS NOW

These challenges should not halt progress – especially when one considers the societal benefits. Time stops for no one, and certainly in the case of the energy sector that statement could not be more accurate. Energy demand is increasing. The Energy Information Administration estimates that annual energy demand will increase roughly 50 percent over the next 25 years. Meanwhile, the debate over global warming seems to have waned. Few authorities are disputing the escalating concentrations of several greenhouse gases due to the burning of fossil fuels. The E.U. is attempting to decrease emissions through its 2006 Energy Efficiency Directive. Many industry observers in the United States believe that there will likely be federal regulation of greenhouse gases within the next three years.

A smart grid would address many of these issues, giving consumers options to manage their usage and costs. By optimizing asset utilization, the smart grid will provide savings in that there is less need to build more power plants to meet increased electricity demand. As a self-healing grid that detects, responds and restores functions, the smart grid can greatly reduce the economic impact of blackouts and power interruptions.

A smart grid that provides the needed power quality can ensure the strong and resilient energy infrastructure necessary for the 21st-century economy. Lastly, a smart grid will enable plug-and-play integration of renewables, distributed resources and control systems.

INCENTIVES FOR MODERNIZATION

Despite all of these potential benefits, more incentives are needed to drive grid modernization efforts. Several mechanisms are available to encourage investment. Some utilities are already using or evaluating alternative rate structures, such as net metering and revenue decoupling, that give utilities and consumers incentives to use less energy. Net metering awards energy incentives or credits for consumer-based renewables, and revenue decoupling is a mechanism designed to eliminate or reduce the dependence of a utility’s revenues on sales. Other programs – such as energy-efficiency or demand-reduction incentives – motivate consumers and businesses to adopt long-term energy-efficient behaviors (such as using programmable thermostats) and to consider energy efficiency when using appliances and computers, and even operating their homes.
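As a simple illustration of one common form of net metering, the sketch below nets a customer’s on-site generation against consumption for a billing period and carries any excess forward as a credit; the volumes and rate are hypothetical, and actual net metering rules vary widely by jurisdiction.

```python
# Hypothetical net-metering bill for one month. Rules differ by jurisdiction;
# this shows only the simple "net the kilowatt-hours" form.

consumption_kwh = 800.0     # what the household drew from the grid
generation_kwh = 300.0      # what its rooftop panels exported
retail_rate = 0.14          # $/kWh

net_kwh = consumption_kwh - generation_kwh
if net_kwh >= 0:
    bill = net_kwh * retail_rate
    credit_forward = 0.0
else:
    bill = 0.0
    credit_forward = -net_kwh  # excess kWh carried to the next billing period

print(f"Net usage: {net_kwh:.0f} kWh, bill ${bill:.2f}, credit carried {credit_forward:.0f} kWh")
```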

Policy and regulatory strategy should incorporate these means and include others, such as accelerated depreciation and tax incentives. Accelerated depreciation encourages businesses to purchase new assets, since depreciation is steeper in the earlier years of the asset’s life and taxes are deferred to a later period. Tax incentives could be put in place for purchasing smart grid components. Utility commissions could require utilities to consider all societal benefits, rather than just rate impacts, when crafting the business case. Utilities could take federal income tax credits for the investments. Leaders could include smart grid technologies as a critical component of their overall energy policy.
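
The tax-deferral effect of accelerated depreciation is easy to see in a short sketch. The example below uses a hypothetical asset cost, life and tax rate with a generic 200-percent declining-balance schedule (not a statement of current tax law); it compares accelerated and straight-line depreciation for the same asset and shows how tax payments are pushed into later years.

    # Straight-line vs. double-declining-balance depreciation for a
    # hypothetical $10 million asset, 10-year life, 35% tax rate.
    cost, life, tax_rate = 10_000_000.0, 10, 0.35

    straight_line = [cost / life] * life

    accelerated, book = [], cost
    for year in range(life):
        remaining = life - year
        # Double-declining balance, switching to straight line when that is larger.
        deduction = max(book * 2.0 / life, book / remaining)
        accelerated.append(deduction)
        book -= deduction

    first_year_deferral = (accelerated[0] - straight_line[0]) * tax_rate
    print(f"Year-1 accelerated deduction:   ${accelerated[0]:,.0f}")
    print(f"Year-1 straight-line deduction: ${straight_line[0]:,.0f}")
    print(f"Tax deferred in year 1:         ${first_year_deferral:,.0f}")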

Only when all of these policies and incentives are put in place will smart grids truly become a reality.

Collaborative Policy Making And the Smart Grid

A search on Google for the keywords smart grid returns millions of results, and a list of organizations talking about or working on smart grid initiatives would seem to yield nearly as many. The comparison is made humorously, but it illustrates the proliferation of groups interested in redesigning and rebuilding the varied power infrastructure to support the future economy. Since building a smart infrastructure is clearly in the public’s interest, it’s important that all affected stakeholders – from utilities and legislators to consumers and regulators – participate in creating the vision, policies and framework for these critical investments.

One organization, the GridWise Alliance, was formed specifically to promote a broad collaborative effort for all interest groups shaping this agenda. Representing a consortium of more than 60 public organizations and private companies, GridWise Alliance members are aligned around a shared vision of a transformed and modern electric system that integrates infrastructure, processes, devices, information and market structure so that energy can be generated, distributed and consumed more reliably and cost-effectively.

From the time of its creation in 2003, the GridWise Alliance has focused on the federal legislative process to ensure that smart grid programs and policies were included in the priorities of the various federal agencies. The Alliance continues to focus on articulating to elected officials, public policy agencies and the private sector the urgent need to build a smarter 21st-century utility infrastructure. Last year, the Alliance provided significant input into the development of smart grid legislation, which was passed by both houses of Congress and signed into law by the President at the end of 2007. The Alliance has evolved to become one of the “go-to” parties for members of Congress and their staffs as they prepare for new legislation aimed at transforming to a modern and intelligent electricity grid.

The Alliance continues to demonstrate its effectiveness in various ways. The chair of the Alliance, Guido Bartels, was recently named, along with representatives from seven other Alliance member companies, to the U.S. Department of Energy’s Electricity Advisory Committee (EAC). This organization is being established to “enhance leadership in electricity delivery modernization and provide senior-level counsel to DOE on ways that the nation can meet the many barriers to moving forward, including the deployment of smart grid technologies.” Another major area of focus is the national GridWeek conference. This year’s GridWeek 2008 is focused on “delivering sustainable energy,” and the Alliance expects more than 800 participants to discuss and debate topics such as Enabling Energy Efficiency, Smart Grid in a Carbon Economy and Securing the Smart Grid.

Going forward, the Alliance will expand its reach by continuing to broaden its membership and by working with other U.S. stakeholder organizations to provide a richer understanding of the value and impacts of a smart grid. The Alliance is already working with organizations such as the NARUC-FERC Smart Grid Collaborative, the National Conference of State Legislatures (NCSL), the National Governors Association (NGA), the American Public Power Association (APPA) and others. Beyond U.S. borders, the Alliance will continue to strengthen its relations with other smart grid organizations, such as those in the European Union and Australia, to ensure that we’re gaining insight and best practices from other markets.

Collaboration such as that exemplified by the Alliance is critical for making effective and impactful public policy. The future of our nation’s electricity infrastructure, economy and, ultimately, health and safety depends on the leadership of organizations such as the GridWise Alliance.

Utility Mergers and Acquisitions: Beating the Odds

Merger and acquisition activity in the U.S. electric utility industry has increased following the 2005 repeal of the Public Utility Holding Company Act (PUHCA). A key question for the industry is not whether M&A will continue, but whether utility executives are prepared to manage effectively the complex regulatory challenges that have evolved.

M&A activity is (and always has been) the most potent, visible and (often) irreversible option available to utility CEOs who wish to reshape their portfolios and meet their shareholders’ expectations for returns. However, M&A has too often been applied reflexively – much like the hammer that sees everything as a nail.

The American utility industry is likely to undergo significant consolidation over the next five years. There are several compelling rationales for consolidation. First, M&A has the potential to offer real economic value. Second, capital-market and competitive pressures favor larger companies. Third, the changing regulatory landscape favors larger entities with the balance sheet depth to weather the uncertainties on the horizon.

LEARNING FROM THE PAST

Historically, however, acquirers have found it difficult to derive value from merged utilities. With the exception of some vertically integrated deals, most M&A deals have been value-neutral or value-diluting. This track record can be explained by a combination of factors: steep acquisition premiums, harsh regulatory givebacks, anemic cost reduction targets and (in more than half of the deals) a failure to achieve targets quickly enough to make a difference. In fact, over an eight-year period, fewer than half of utility mergers actually met or exceeded the announced cost reductions expected from merger synergies (Figure 1).

The lessons learned from these transactions can be summarized as follows: Don’t overpay; negotiate a good regulatory deal; aim high on synergies; and deliver on them.

In trying to deliver value-creating deals, CEOs often bump up against the following realities:

  • The need to win approval from the target’s shareholders drives up acquisition premiums.
  • The need to receive regulatory approval for the deal and to alleviate organizational uncertainty leads to compromises.
  • Conservative estimates of the cost reductions resulting from synergies are made to reduce the risk of giving away too much in regulatory negotiations.
  • Delivering on synergies proves tougher than anticipated because of restrictions agreed to in regulatory deals or because of the organizational inertia that builds up during the 12- to 18-month approval process.

LOOKING AT PERFORMANCE

Total shareholder return (TSR) is significantly affected by two external deal negotiation levers – acquisition premiums and regulatory givebacks – and two internal levers – synergies estimated and synergies delivered. Between 1997 and 2004, mergers in all U.S. industries created an average TSR of 2 to 3 percent relative to the market index two years after closing. In contrast, utilities mergers typically underperformed the utility index by about 2 to 3 percent three years after the transaction announcement. T&D mergers underperformed the index by about 4 percent, whereas mergers of vertically integrated utilities beat the index by about 1 percent three years after the announcement (Figure 2).

For 10 recent mergers, the lower the share of the merger savings retained by the utilities and the higher the premium paid for the acquisition, the greater the likelihood that the deal destroyed shareholder value, resulting in negative TSR.
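
Those two external levers and two internal levers can be combined into a back-of-the-envelope value test. The sketch below (all figures hypothetical) estimates value created for the acquirer's shareholders as the present value of synergies actually delivered and retained after regulatory givebacks, minus the acquisition premium; with a high premium and a low retained share, the result turns negative.

    # Back-of-the-envelope merger value test (all inputs hypothetical).
    annual_synergies = 150_000_000.0   # $/year of targeted cost reductions
    delivery_rate = 0.80               # fraction of the target actually delivered
    shareholder_share = 0.50           # fraction retained after regulatory givebacks
    discount_rate = 0.08
    years = 10
    premium = 700_000_000.0            # acquisition premium over market value

    # Present value of an annuity of delivered, retained synergies.
    annuity_factor = (1 - (1 + discount_rate) ** -years) / discount_rate
    pv_synergies = annual_synergies * delivery_rate * shareholder_share * annuity_factor

    value_created = pv_synergies - premium
    print(f"PV of retained synergies: ${pv_synergies:,.0f}")
    print(f"Value created (negative = destroyed): ${value_created:,.0f}")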

Although these appear to be obvious pitfalls that a seasoned management team should be able to recognize and overcome, translating this knowledge into tangible actions and results has been difficult.

So how can utility boards and executives avoid being trapped in a cycle of doing the same thing again and again while expecting different results (Einstein’s definition of insanity)? We suggest that a disciplined end-to-end M&A approach will (if well-executed) tilt the balance in the acquirer’s favor and generate long-term shareholder value. That approach should include the four following broad objectives:

  • Establishment of compelling strategic logic and rationale for the deal;
  • A carefully managed regulatory approval process;
  • Integration that takes place early and aggressively; and
  • A top-down approach for designing realistic but ambitious economic targets.

GETTING IT RIGHT: FOUR BROAD OBJECTIVES THAT ENHANCE M&A VALUE CREATION

To complete successful M&As, utilities must develop a more disciplined approach that incorporates the lessons learned from both utilities and other industrial sectors. At the highest level, adopting a framework with four broad objectives will enhance value creation before the announcement of the deal and through post-merger integration. To do this, utilities must:

  1. Establish a compelling strategic logic and rationale for the deal. A critical first step is asking the question, why do the merger? To answer this question, deal participants must:
    • Determine the strategic logic for long-term value creation with and without M&A. Too often, executives are optimistic about the opportunity to improve other utilities, but they overlook the performance potential in their current portfolio. For example, without M&A, a utility might be able to invest and grow its rate base, reduce the cost of operations and maintenance, optimize power generation and assets, explore more aggressive rate increases and changes to the regulatory framework, and develop the potential for growth in an unregulated environment. Regardless of whether a utility is an acquirer or a target, a quick (yet comprehensive) assessment will provide a clear perspective on potential shareholder returns (and risks) with and without M&A.
    • Conduct a value-oriented assessment of the target. Utility executives typically have an intuitive feel for the status of potential M&A targets adjacent to their service territories and in the broader subregion. However, when considering M&A, they should go beyond the obvious criteria (size and geography) and candidates (contiguous regional players) to consider specific elements that expose the target’s value potential for the acquirer. Such value drivers could include an enhanced power generation and asset mix, improvements in plant availability and performance, better cost structures, an ability to respond to the regulatory environment, and a positive organizational and cultural fit. Also critical to the assessment are the noneconomic aspects of the deal, such as headquarters sharing, potential loss of key personnel and potential paralysis of the company (for example, when a merger or acquisition freezes a company’s ability to pursue M&A and other large initiatives for two years).
    • Assess internal appetites and capabilities for M&A. Successful M&A requires a broad commitment from the executive team, enough capable people for diligence and integration, and an appetite for making the tough decisions essential to achieving aggressive targets. Acquirers should hold pragmatic executive-level discussions with potential targets to investigate such aspects as cultural fit and congruence of vision. Utility executives should conduct an honest assessment of their own management teams’ M&A capabilities and depth of talent and commitment. Among historic M&A deals, those that involved fewer than three states and those in which the acquirer was twice as big as the target were easier to complete and realized more value.
  2. Carefully manage the regulatory approval process. State regulatory approvals present the largest uncertainty and risk in utility M&A, clearly affecting the economics of any deal. However, too often, these discussions start and end with rate reductions so that the utility can secure approvals. The regulatory approval process should be similar to the rigorous due diligence that’s performed before the deal’s announcement. This means that when considering M&A, utilities should:
    • Consider regulatory benefits beyond the typical rate reductions. The regulatory approval process can be used to create many benefits that share rewards and risks, and to provide advantages tailored to the specific merger’s conditions. Such benefits include a stronger combined balance sheet and a potential equity infusion into the target’s subsidiaries; an ability to better manage and hedge a larger combined fuel portfolio; the capacity to improve customer satisfaction; a commitment to specific rate-based investment levels; and a dedication to relieving customer liability on pending litigation. For example, to respond to regulatory policies that mandate reduced emissions, merged companies can benefit not only from larger balance sheets but also from equity infusions to invest in new technology or proven technologies. Merged entities are also afforded the opportunity to leverage combined emissions reduction portfolios.
    • Systematically price out a full range of regulatory benefits. The range should include the timing of “gives” (that is, the sharing of synergy gains with customers in the form of lower rates) as a key value lever; dedicated valuations of potential plans and sensitivities from all stakeholders’ perspectives; and a determination of the features most valued by regulators so that they can be included in a strategy for getting M&A approvals. Executives should be wary of settlements tied to performance metrics that are vaguely defined or inadequately tracked. They should also avoid deals that require new state-level legislation, because too much time will be required to negotiate and close these complex deals. Finally, executives should be wary of plans that put shareholder benefits at the end of the process, because current PUC decisions may not bind future ones.
    • Be prepared to walk away if the settlement conditions imposed by the regulators dilute the economics of the deal. This contingency plan requires that participating executives agree on the economic and timing triggers that could lead to an unattractive deal.
  3. Integrate early and aggressively. Historically, utility transactions have taken an average of 15 months from announcement to closing, given the required regulatory approvals. With such a lengthy time lag, it’s been easy for executives to fall into the trap of putting off important decisions related to the integration and post-merger organization. This delay often leads to organizational inertia as employees in the companies dig in their heels on key issues and decisions rather than begin to work together. To avoid such inertia, early momentum in the integration effort, embodied in the steps outlined below, is critical.
    • Announce the executive team’s organization early on. Optimally, announcements should be made within the first 90 days, and three or four well-structured senior-management workshops with the two CEOs and key executives should occur within the first two months. The decisions announced should be based on such considerations as the specific business unit and organizational options, available leadership talent and alignment with synergy targets by area.
    • Make top-down decisions about integration approach according to business and function. Many utility mergers appear to adopt a “template” approach to integration that leads to a false sense of comfort regarding the process. Instead, managers should segment decision making for each business unit and function. For example, when the acquirer has a best-practice model for fossil operations, the target’s plants and organization should simply be absorbed into the acquirer’s model. When both companies have strong practices, a more careful integration will be required. And when both companies need to transform a particular function, the integration approach should be tailored to achieve a change in collective performance.
    • Set clear guidelines and expectations for the integration. A critical part of jump-starting the integration process is appointing an integration officer with true decision-making authority, and articulating the guidelines that will serve as a road map for the integration teams. These guidelines should clearly describe the roles of the corporation and individual operating teams, as well as provide specific directions about control and organizational layers and review and approval mechanisms for major decisions.
    • Systematically address legal and organizational bottlenecks. The integration’s progress can be impeded by legal or organizational constraints on the sharing of sensitive information. In such situations, significant progress can be achieved by using clean teams – neutral people who haven’t worked in the area before – to ensure data is exchanged and sanitized analytical results are shared. Improved information sharing can aid executive-level decision making when it comes to commercially sensitive areas such as commercial marketing-and-trading portfolios, performance improvements, and other unregulated business-planning and organizational decisions.
  4. Use a top-down approach to design realistic but ambitious economic targets. Synergies from utility mergers have short shelf lives. With limits on a post-merger rate freeze or rate-case filing, the time to achieve the targets is short. To achieve their economic targets, merged utilities should:
    • Construct the top five to 10 synergy initiatives to capture value and translate them into road maps with milestones and accountabilities. Identifying and promoting clear targets early in the integration effort lead to a focus on the merger’s synergy goals.
    • Identify the links between synergy outcomes and organizational decisions early on, and manage those decisions from the top. Such top-down decisions should specify which business units or functional areas are to be consolidated. Integration teams often become gridlocked over such decisions because of conflicts of interest and a lack of objectivity.
    • Control the human resources policies related to the merger. Important top-down decisions include retention and severance packages and the appointment process. Alternative severance, retirement and retention plans should be priced explicitly to ensure a tight yet fair balance between the plans’ costs and benefits.
    • Exploit the merger to create opportunities for significant reductions in the acquirer’s cost base. Typical merger processes tend to focus on reductions in the target’s cost base. However, in many cases the acquirer’s cost base can also be reduced. Such reductions can be a significant source of value, making the difference between success and failure. They also communicate to the target’s employees that the playing field is level.
    • Avoid the tendency to declare victory too soon. Most synergies are related to standardization and rationalization of practices, consolidation of line functions and optimization of processes and systems. These initiatives require discipline in tracking progress against key milestones and cost targets. They also require a tough-minded assessment of red flags and cost increases over a sustained time frame – often two to three years after the closing.

RECOMMENDATIONS: A DISCIPLINED PROCESS IS KEY

Despite the inherent difficulties, M&A should remain a strategic option for most utilities. If they can avoid the pitfalls of previous rounds of mergers, executives have an opportunity to create shareholder value, but a disciplined and comprehensive approach to both the M&A process and the subsequent integration is essential.

Such an approach begins with executives who insist on a clear rationale for value creation with and without M&A. Their teams must make pragmatic assessments of a deal’s economics relative to its potential for improving base business. If they determine the deal has a strong rationale, they must then orchestrate a regulatory process that considers broad options beyond rate reductions. Having the discipline to walk away if the settlement conditions dilute the deal’s economics is a key part of this process. A disciplined approach also requires that an aggressive integration effort begin as soon as the deal has been announced – an effort that entails a modular approach with clear, fast, top-down decisions on critical issues. Finally, a disciplined process requires relentless follow-through by executives if the deal is to achieve ambitious yet realistic synergy targets.

The Technology Demonstration Center

When a utility undergoes a major transformation – such as adopting new technologies like advanced metering – the costs and time involved require that the changes are accepted and adopted by each of the three major stakeholder groups: regulators, customers and the utility’s own employees. A technology demonstration center serves as an important tool for promoting acceptance and adoption of new technologies by displaying tangible examples and demonstrating the future customer experience. IBM has developed the technology center development framework as a methodology to efficiently define the strategy and tactics required to develop a technology center that will elicit the desired responses from those key stakeholders.

KEY STAKEHOLDER BUY-IN

To successfully implement major technology change, utilities need to consider the needs of the three major stakeholders: regulators, customers and employees.

Regulators. Utility regulators are naturally wary of any transformation that affects their constituents on a grand scale, and thus their concerns must be addressed to encourage regulatory approval. The technology center serves two purposes in this regard: educating the regulators and showing them that the utility is committed to educating its customers on how to receive the maximum benefits from these technologies.

Given the size of a transformation project, it’s critical that regulators support the increased spending required and any consequent increase in rates. Many regulators, even those who favor new technologies, believe that the utility will benefit the most and should thus cover the cost. If utilities expect cost recovery, the regulators need to understand the complexity of new technologies and the costs of the interrelated systems required to manage these technologies. An exhibit in the technology center can go “behind the curtain,” giving regulators a clearer view of these systems, their complexity and the overall cost of delivering them.

Finally, each stage in the deployment of new technologies requires a new approval process and provides opportunities for resistance from regulators. For the utility, staying engaged with regulators throughout the process is imperative, and the technology center provides an ideal way to continue the conversation.

Customers. Once regulators give their approval, the utility must still make its case to the public. The success of a new technology project rests on customers’ adoption of the technology. For example, if customers continue using appliances as they always have, at a regular pace throughout the day without adjusting for off-peak pricing, the utility will fail to achieve the major planned cost advantage: a reduction in production facilities. Wide-scale customer adoption is therefore key. Indeed, general estimates indicate that customer adoption rates of roughly 20 percent are needed to break even in a critical peak-pricing model. [1]
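
The arithmetic behind such a break-even estimate can be sketched in a few lines. The figures below are entirely hypothetical (the program cost and per-participant avoided cost are assumptions, not data from the endnote's source); they simply show how an adoption threshold of roughly 20 percent can arise.

    # Break-even adoption rate for a critical peak-pricing program
    # (all numbers hypothetical).
    customers = 1_000_000                    # eligible customers
    program_cost_per_year = 30_000_000.0     # metering, IT, marketing, admin ($/yr)
    avoided_cost_per_participant = 150.0     # avoided capacity/energy value ($/yr)

    break_even_rate = program_cost_per_year / (customers * avoided_cost_per_participant)
    print(f"Break-even adoption rate: {break_even_rate:.0%}")   # -> 20%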

Given the complexity of these technologies, it’s quite possible that customers will fail to see the value of the program – particularly in the context of the changes in energy use they will need to undertake. A well-designed campaign that demonstrates the benefits of tiered pricing will go a long way toward encouraging adoption. By showcasing the future customer experience, the technology center can provide a tangible example that serves to create buzz, get customers excited and educate them about benefits.

Employees. Obtaining employee buy-in on new programs is as important as winning over the other two stakeholder groups. For transformation to be successful, an understanding of the process must be moved out of the boardroom and communicated to the entire company. Employees whose responsibilities will change need to know how they will change, how their interactions with the customer will change and what benefits are in it for them. At the same time, utility employees are also customers. They talk to friends and spread the message. They can be the utility’s best advocates or its greatest detractors. Proper internal communication is essential for a smooth transition from the old ways to the new, and the technology center can and should be used to educate employees on the transformation.

OTHER GOALS FOR THE TECHNOLOGY DEMONSTRATION CENTER

The objectives discussed above represent one possible set of goals for a technology center. Utilities may well have other reasons for erecting the technology center, and these should be addressed as well. As an example, the utility may want to present a tangible display of its plans for the future to its investors, letting them know what’s in store for the company. Likewise, the utility may want to be a leader in its industry or region, and the technology center provides a way to demonstrate that to its peer companies. The utility may also want to be recognized as a trendsetter in environmental progress, and a technology center can help people understand the changes the company is making.

The technology center needs to be designed with the utility’s particular environment in mind. The technology center development framework is, in essence, a road map created to help the utility define and prioritize the technology center’s key strategic elements and components so as to maximize its impact on the intended audience.

DEVELOPING THE TECHNOLOGY CENTER

Unlike other aspects of a traditional utility, the technology center needs to appeal to customers visually, as well as explain the significance and impact of new technologies. The technology center development framework presented here was developed by leveraging trends and experiences in retail, including “experiential” retail environments such as the Apple Stores in malls across the United States. These new retail environments offer a much richer and more interactive experience than traditional retail outlets, which may employ some basic merchandising and simply offer products for sale.

Experiential environments have arisen partly as a response to competition from online retailers and the increased complexity of products. The technology center development framework uses the same state-of-the-art design strategies adopted by high-end retailers, inspiring the utility’s executives and leadership to create a compelling experience that elicits the desired response and buy-in from the stakeholders described above.

Phase 1: Technology Center Strategy

During this phase, a utility typically spends four to eight weeks developing an optimal strategy for the technology center. To accomplish this, planners identify and delineate in detail three major elements:

  • The technology center’s goals;
  • Its target audience; and
  • Content required to achieve those goals.

As shown in Figure 1, these pieces are not developed in isolation; in fact, the process is iterative: The technology center’s goals set the stage for determining the audience and content, and those two elements influence each other. The outcome of this phase is a complete strategy road map that defines the direction the technology center will take.

To understand the Phase 1 objectives properly, it’s necessary to examine the logic behind them. The methodology focuses on the three elements mentioned previously – goals, audience and content – because these are easily overlooked and misaligned by organizations.

Utility companies inevitably face multiple and competing goals. Thus, it’s critical to identify the goals specifically associated with the technology center and to distinguish them from other corporate goals or goals associated with implementing a new technology. Taking this step forces the organization to define which goals can be met by the technology center with the greatest efficiency, and establishes a clear plan that can be used as a guide in resolving the inevitable future conflicts.

Similarly, the stakeholders served by the utility represent distinct audiences. Based on the goals of the center and the organization, as well as the internal expectations set by managers, the target audience needs to be well defined. Many important facets of the technology center, such as content and location, will be determined in part by the target audience. Finally, the right content is critical to success: a regulator may want to see different information than customers do.

In addition, the audience’s specific needs dictate different content options. Do the utility’s customers care about the environment? Do they care more about advances in technology? Are they concerned about how their lives will change in the future? These questions need to be answered early in the process.

The key to successfully completing Phase 1 is constant engagement with the utility’s decision makers, since their expectations for the technology center will vary greatly depending on their responsibilities. Throughout this phase, the technology center’s planners need to meet with these decision makers on a regular basis, gather and respect their opinions, and come to the optimal mix for the utility on the whole. This can be done through interviews or a series of workshops, whichever is better suited for the utility. We have found that by employing this process, an organization can develop a framework of goals, audience and content mix that everyone will agree on – despite differing expectations.

Phase 2: Design Characteristics

The second phase of the development framework focuses on the high-level physical layout of the technology center. These “design characteristics” will affect the overall layout and presentation of the technology center.

We have identified six key characteristics that need to be determined. Each is developed as a trade-off between two extremes; this helps utilities understand the issues involved and debate the solutions. Again, there are no right answers to these issues – the optimal solution depends on the utility’s environment and expectations:

  • Small versus large. The technology center can be small, like a cell phone store, or large, like a Best Buy.
  • Guided versus self-guided. The center can be designed to allow visitors to guide themselves, or staff can be retained to guide visitors through the facility.
  • Single versus multiple. There may be a single site, or multiple sites. As with the first issue (small versus large), one site may be a large flagship facility, while the others represent smaller satellite sites.
  • Independent versus linked. Depending on the nature of the exhibits, technology center sites may operate independently of each other or include exhibits that are remotely linked in order to display certain advanced technologies.
  • Fixed versus mobile. The technology center can be in a fixed physical location, but it can also be mounted on a truck bed to bring the center to audiences around the region.
  • Static versus dynamic. The exhibits in the technology center may become outdated. How easy will it be to change or swap them out?

Figure 2 illustrates a sample set of design characteristics for one technology center, using a sample design characteristic map. This map shows each of the characteristics laid out around the hexagon, with the preference ranges represented at each vertex. By mapping out the utility’s options with regard to the design characteristics, it’s possible to visualize the trade-offs inherent in these decisions, and thus identify the optimal design for a given environment. In addition, this type of map facilitates reporting on the project to higher-level executives, who may benefit from a visual executive summary of the technology center’s plan.

The tasks in Phase 2 require the utility’s staff to be just as engaged as in the strategy phase. A workshop or interviews with staff members who understand the various needs of the utility’s region and customer base should be conducted to work out an optimal plan.

Phase 3: Execution Variables

Phases 1 and 2 provide a strategy and design for the technology center, and allow the utility’s leadership to formulate a clear vision of the project and come to agreement on the ultimate purpose of the technology center. Phase 3 involves engaging the technology developers to identify which aspects of the new technology – for example, smart appliances, demand-side management, outage management and advanced metering – will be displayed at the technology center.

During this phase, utilities should create a complete catalog of the technologies that will be demonstrated, and match them up against the strategic content mix developed in Phase 1. A ranking is then assigned to each potential new technology based on several considerations, such as how well it matches the strategy, how feasible it is to demonstrate the given technology at the center, and what costs and resources would be required. Only the most efficient and well-matched technologies and exhibits will be displayed.
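
One way to make the Phase 3 ranking concrete is a simple weighted scoring model. The sketch below (Python; the criteria weights, candidate exhibits and scores are illustrative assumptions, not part of the framework itself) ranks candidate technologies by strategy fit, feasibility of demonstration and cost.

    # Illustrative weighted scoring of candidate technology-center exhibits.
    weights = {"strategy_fit": 0.5, "feasibility": 0.3, "cost": 0.2}

    # Scores on a 1-5 scale; "cost" is scored so that 5 = least expensive.
    candidates = {
        "Advanced metering display": {"strategy_fit": 5, "feasibility": 4, "cost": 4},
        "Smart appliance demo":      {"strategy_fit": 4, "feasibility": 5, "cost": 3},
        "Outage management wall":    {"strategy_fit": 3, "feasibility": 2, "cost": 2},
    }

    def score(scores):
        return sum(weights[criterion] * value for criterion, value in scores.items())

    # Rank from best to worst fit; only the top-ranked exhibits would be built.
    for name, scores in sorted(candidates.items(), key=lambda kv: score(kv[1]), reverse=True):
        print(f"{name}: {score(scores):.1f}")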

During Phase 3, outside vendors are also engaged, including architects, designers, mobile operators (if necessary) and real estate agents, among others. With the first two phases providing a guide, the utility can now open discussions with these vendors and present a clear picture of what it wants. The technical requirements for each exhibit will be cataloged and recorded to ensure that any design will take all requirements into account. Finally, the budget and work plan are written and finalized.

CONCLUSION

With the planning framework completed, the team can now build the center. The framework serves as the blueprint for the center, and all relevant benchmarks must be transparent and open for everyone to see. Disagreements during the buildout phase can be referred back to the framework, and issues that don’t fit the framework are discarded. In this way, the utility can ensure that the technology center will meet its goals and serve as a valuable tool in the process of transformation.

Thank you to Ian Simpson, IBM Global Business Services, for his contributions to this paper.

ENDNOTE

  1. Critical peak pricing refers to the model whereby utilities use peak pricing only on days when demand for electricity is at its peak, such as extremely hot days in the summer.

Growing (or Shrinking) Trends in Nuclear Power Plant Construction

Around the world, the prospects for nuclear power generation are improving, and a number of the nuclear plants now under construction are smaller than those currently in the limelight. Offering advantages in certain situations, these smaller plants can more readily serve smaller grids as well as be used for distributed generation (with power plants located close to the demand centers and the main grid providing back-up). Smaller plants are also easier to finance, particularly in countries that are still in the early days of their nuclear power programs.

In recent years, development and licensing efforts have focused primarily on large, advanced reactors, due to their economies of scale and obvious application to developed countries with substantial grid infrastructure. Meanwhile, the wide scope for smaller nuclear plants has received less attention. However, of the 30 or more countries that are moving toward implementing nuclear power programs, most are likely to be looking initially for units under 1,000 MWe, and some for units of less than half that amount.

EXISTING DESIGNS

With that in mind, let’s take a look at some of the current designs.

There are many plants under 1,000 MWe now in operation, even if their replacements tend to be larger. (In 2007 four new units were connected to the grid – two large ones, one 202-MWe unit and one 655-MWe unit.) In addition, some smaller reactors are either on offer now or likely to be available in the next few years.

Five hundred to 700 MWe. There are several plants in this size range, including Westinghouse AP600 (which has U.S. design certification) and the Canadian Candu-6 (being built in Romania). In addition, China is building two CNP-600 units at Qinshan but does not plan to build any more of them. In Japan, Hitachi-GE has completed the design of a 600-MWe version of its 1,350-MWe ABWR, which has been operating for 10 years.

Two hundred fifty to 500 MWe. In the 250- to 500-MWe category (output that is electric rather than heat), there are a few designs pending but little immediately on offer.

IRIS. Being developed by an international team led by Westinghouse in the United States, IRIS – or, more formally, International Reactor Innovative and Secure – is an advanced third-generation modular 335-MWe pressurized water reactor (PWR) with integral steam generators and a primary coolant system all within the pressure vessel. U.S. design certification is at pre-application stage with a view to final design approval by 2012 and deployment by 2015 to 2017.

VBER-300 PWR. This 295- to 325-MWe unit from Russia was designed by OKBM based on naval power plants and is now being developed as a land-based unit with the state-owned nuclear holding company Kazatomprom, with a view to exporting it. The first two units will be built in Southwest Kazakhstan under a Russian-Kazakh joint venture.

VK-300. This Russian-built boiling water reactor is being developed for co-generation of both power and district heating or heat for desalination (150 MWe plus 1675 GJ/hr) by the nuclear research and development organization NIKIET. The unit evolved from the VK-50 BWR at Dimitrovgrad but uses standard components from larger reactors wherever possible. In September 2007, it was announced that six of these units would be built at Kola and at Primorskaya in Russia’s far east, to start operating between 2017 and 2020.

NP-300 PWR. Developed in France from submarine power plants and aimed at export markets for power, heat and desalination, this Technicatome (Areva)-designed reactor has passive safety systems and can be built for applications of 100 to 300 MWe.

China is also building a 300-MWe PWR (pressurized water reactor) nuclear power plant in Pakistan at Chasma (alongside another that started up in 2000); however, this is an old design based on French technology and has not been offered more widely. The new unit is expected to come online in 2011.

One hundred to 300 MWe. This category includes both conventional PWR and high-temperature gas-cooled reactors (HTRs); however, none in the second category are being built yet. Argentina’s CAREM nuclear power plant is being developed by CNEA and INVAP as a modular 27-MWe simplified PWR with integral steam generators designed to be used for electricity generation or for water desalination.

FLOATING PLANTS

After many years of promoting the idea, Russia’s state-run atomic energy corporation Rosatom has approved construction of a nuclear power plant on a 21,500-ton barge to supply 70 MWe of power plus 586 GJ/hr of heat to Severodvinsk, in the Archangelsk region of Russia. The contract to build the first unit was let by nuclear power station operator Rosenergoatom to the Sevmash shipyard in May 2006. Expected to cost $337 million (including $30 million already spent in design), the project is 80 percent financed by Rosenergoatom and 20 percent financed by Sevmash. Operation is expected to begin in mid-2010.

Rosatom is planning to construct seven additional floating nuclear power plants, each (like the initial one) with two 35-MWe OKBM KLT-40S nuclear reactors. Five of these will be used by Gazprom – the world’s biggest extractor of natural gas – for offshore oil and gas field development and for operations on Russia’s Kola and Yamal Peninsulas. One of these reactors is planned for 2012 commissioning at Pevek on the Chukotka Peninsula, and another is planned for the Kamchatka region, both in the far east of the country. Even farther east, sites being considered include Yakutia and Taimyr. Electricity cost is expected to be much lower than from present alternatives. In 2007 an agreement was signed with the Sakha Republic (Yakutia region) to build a floating plant for its northern parts, using smaller ABV reactors.

OTHER DESIGNS

On a larger scale, South Korea’s SMART is a 100-MWe PWR with integral steam generators and advanced safety features. It is designed to generate electricity and/or thermal applications such as seawater desalination. Indonesia’s national nuclear energy agency, Batan, has undertaken a pre-feasibility study for a SMART reactor for power and desalination on Madura Island. However, this awaits the building of a reference plant in Korea.

There are three high-temperature, gas-cooled reactor designs capable of being used for power generation, although much of the development impetus has been focused on the thermo-chemical production of hydrogen. Fuel for the first two consists of billiard-ball-size pebbles that can withstand very high temperatures. These designs aim for a step-change in safety, economics and proliferation resistance.

China’s 200-MWe HTR-PM is based on a well-tested small prototype, and a two-module plant is due to start construction at Shidaowan in Shandong province in 2009. This reactor will use the conventional steam cycle to generate power. Start-up is scheduled for 2013. After the demonstration plant, a power station with 18 modules is envisaged.

Very similar to China’s plant is South Africa’s Pebble Bed Modular Reactor (PBMR), which is being developed by a consortium led by the utility Eskom. Production units will be 165 MWe. The PBMR will have a direct-cycle gas turbine generator driven by hot helium. The PBMR Demonstration unit is expected to start construction at Koeberg in 2009 and achieve criticality in 2013.

Both of these designs are based on earlier German reactors that have some years of operational experience. A U.S. design, the Modular Helium Reactor (GT-MHR), is being developed in Russia; in its electrical application, each unit would directly drive a gas turbine giving 280 MWe.

These three designs operate at much higher temperatures than ordinary reactors and offer great potential as sources of industrial heat, including for the thermo-chemical production of hydrogen on a large scale. Much of the development thinking going into the PBMR has been geared to synthetic oil production by Sasol (South African Coal and Oil).

MODULAR CONSTRUCTION

The IRIS developers have outlined the economic case for modular construction of their design (about 330 MWe), and it’s an argument that applies similarly to other smaller units. These developers point out that IRIS, with its moderate size and simple design, is ideally suited for modular construction. The economy of scale is replaced here with the economy of serial production of many small and simple components and prefabricated sections. They expect that construction of the first IRIS unit will be completed in three years, with subsequent production taking only two years.

Site layouts have been developed with multiple single units or multiple twin units. In each case, units will be constructed with enough space around them to allow the next unit to be built while the previous one is operating and generating revenue. And even with this separation, the plant footprint can be very compact: a site with three IRIS single modules providing 1,000 MWe is similar in size to, or smaller than, a site with a single unit of comparable total power.

Eventually, IRIS’ capital and production costs are expected to be comparable to those of larger plants. However, any small unit offers potential for a funding profile and flexibility impossible to achieve with larger plants. As one module is finished and starts producing electricity, it will generate positive cash flow for the construction of the next module. Westinghouse estimates that 1,000 MWe delivered by three IRIS units built at three-year intervals, financed at 10 percent for 10 years, requires a maximum negative cash flow of less than $700 million (compared with about three times that for a single 1,000-MWe unit). For developed countries, small modular units offer the opportunity of building as necessary; for developing countries, smaller units may represent the only option, since such countries’ electric grids are likely unable to accommodate 1,000-plus-MWe single units.
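
A simple staged cash flow model illustrates why the modular funding profile is attractive. The sketch below uses purely illustrative costs, revenues and timing (it is not the Westinghouse analysis): each completed module starts earning revenue that helps fund the next, so the peak negative cash flow is far smaller than for a single large unit of the same total cost.

    # Staged small modules vs. one large unit: peak negative cash flow
    # (illustrative assumptions only; figures in $ millions).
    def peak_negative_cash_flow(spend_per_year, revenue_per_unit_year, build_years, starts):
        """Cumulative cash over a simple timeline of construction starts."""
        horizon = max(starts) + build_years + 5
        cash, peak = 0.0, 0.0
        for year in range(horizon):
            units_online = sum(1 for s in starts if year >= s + build_years)
            units_building = sum(1 for s in starts if s <= year < s + build_years)
            cash += units_online * revenue_per_unit_year - units_building * spend_per_year
            peak = min(peak, cash)
        return peak

    # Three ~330-MWe modules, started three years apart, three-year builds.
    modular = peak_negative_cash_flow(
        spend_per_year=400.0, revenue_per_unit_year=250.0, build_years=3, starts=[0, 3, 6])

    # One 1,000-MWe unit, five-year build, same total overnight cost.
    single = peak_negative_cash_flow(
        spend_per_year=720.0, revenue_per_unit_year=750.0, build_years=5, starts=[0])

    print(f"Peak negative cash flow, three staged modules: {modular:,.0f}")
    print(f"Peak negative cash flow, single large unit:    {single:,.0f}")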

Distributed generation. The advent of reactors much smaller than those being promoted today means that reactors will be available to serve smaller grids and to be put into use for distributed generation (with power plants close to the demand centers and the main grid used for back-up). This does not mean, however, that large units serving national grids will become obsolete – as some appear to wish.

WORLD MARKET

One aspect of the Global Nuclear Energy Partnership program is international deployment of appropriately sized reactors with desirable designs and operational characteristics (some of which include improved economics, greater safety margins, longer operating cycles with refueling intervals of up to three years, better proliferation resistance and sustainability). Several of the designs described earlier in this paper are likely to meet these criteria.

IRIS itself is being developed by an international team of 20 organizations from ten countries (Brazil, Croatia, Italy, Japan, Lithuania, Mexico, Russia, Spain, the United Kingdom and the United States) on four continents – a clear demonstration of how reactor development is proceeding more widely.

Major reactor designers and vendors are now typically international in character and marketing structure. To wit: the United Kingdom’s recent announcement that it would renew its nuclear power capacity was anticipated by four companies lodging applications for generic design approval – two from the United States (each with Japanese involvement), one from Canada and one from France (with German involvement). These are all big units, but in demonstrating the viability of late third-generation technology, they will also encourage consideration of smaller plants where those are most appropriate.

The Power of Prediction: Improving the Odds of a Nuclear Renaissance

After 30 years of disfavor in the United States, the nuclear power industry is poised for resurgence. With the passage of the Energy Policy Act of 2005, the specter of over $100 per barrel oil prices and the public recognition that global warming is real, nuclear power is now considered one of the most practical ways to clean up the power grid and help the United States reduce its dependence on foreign oil. The industry has responded with a resolve to build a new fleet of nuclear plants in anticipation of what has been referred to as a nuclear renaissance.

The nuclear power industry is characterized by a remarkable level of physics and mechanical science. Yet, given the confluence of a number of problematic issues – an aging workforce, the shortage of skilled trades, the limited availability of equipment and parts, and a history of late, over-budget projects – questions arise about whether the level of management science the industry plans to use is sufficient to navigate the challenges ahead.

According to data from the Energy Information Administration (EIA), nuclear power supplies about 20 percent of U.S. electricity generation, with approximately 106 gigawatts (GW) of capacity from 66 plants housing 104 reactor units. To date, more than 30 new reactors have been proposed, which would produce a net increase of approximately 19 GW of nuclear capacity through 2030. Considering the growth of energy demand, this increased capacity will barely keep pace with increasing base load requirements.

According to Assistant Secretary for Nuclear Energy Dennis Spurgeon, we will need approximately 45 new reactors online by 2030 just to maintain the 20 percent share of U.S. electricity generation that nuclear power already holds.

Meanwhile, Morgan Stanley vice chairman Jeffrey Holzschuh is very positive about the next generation of nuclear power but warns that the industry’s future is ultimately a question of economics. “Given the history, the markets will be cautious,” he says.

As shown in Figures 1-3, nuclear power is cost competitive with other forms of generation, but its upfront capital costs are comparatively high. Historically, long construction periods have led to serious cost volatility. The viability of the nuclear power industry ultimately depends on its ability to demonstrate that plants can be built economically and reliably. Holzschuh predicts, “The first few projects will be under a lot of public scrutiny, but if they are approved, they will get funded. The next generation of nuclear power will likely be three to five plants or 30, nothing in between.”

Due to its cohesive identity, the nuclear industry is viewed by the public and investors as a single entity, making the fate of industry operators – for better or for worse – a shared destiny. For that reason, it’s widely believed that if these first projects suffer the same sorts of significant cost over-runs and delays experienced in the past, the projected renaissance will quickly give way to a return to the dark ages.

THE PLAYERS

Utility companies, regulatory authorities, reactor manufacturers, design and construction vendors, financiers and advocacy groups all have critical roles to play in creating a viable future for the nuclear power industry – one that will begin with the successful completion of the first few plants in the United States. By all accounts, an impressive foundation has been laid, beginning with an array of government incentives (such as loan guarantees and tax credits) and simplified regulation to help jump-start the industry.

Under the Energy Policy Act of 2005, the U.S. Department of Energy has the authority to issue $18.5 billion in loan guarantees for new nuclear plants and $2 billion for uranium enrichment projects. In addition, there’s standby support indemnifying the first six advanced nuclear reactors against delays caused by Nuclear Regulatory Commission (NRC) reviews or litigation. The Treasury Department has issued guidelines for an allocation and approval process for production tax credits for advanced nuclear: 1.8 cents per kilowatt-hour for the first eight years of operation, with the final rules to be issued in fiscal year 2008.
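
As a rough illustration of what that credit could be worth, the short calculation below assumes a hypothetical 1,000-MW unit running at a 90 percent capacity factor and ignores any allocation caps or other limits in the Treasury guidelines.

    # Rough value of a 1.8 cent/kWh production tax credit for a hypothetical
    # 1,000-MW plant (ignores allocation caps and other program limits).
    capacity_mw = 1_000
    capacity_factor = 0.90
    credit_per_kwh = 0.018          # dollars
    hours_per_year = 8_760

    annual_kwh = capacity_mw * 1_000 * hours_per_year * capacity_factor
    annual_credit = annual_kwh * credit_per_kwh

    print(f"Annual generation: {annual_kwh / 1e9:.1f} billion kWh")
    print(f"Annual credit: ${annual_credit / 1e6:.0f} million")
    print(f"Eight-year total: ${annual_credit * 8 / 1e9:.1f} billion")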

The 20-year renewal of the Price-Anderson Act in 2005 and anticipated future restrictions on carbon emissions further improve the comparative attractiveness of nuclear power. To be eligible for the 2005 production tax credits, a license application must be tendered to the NRC by the end of 2008, with construction beginning before 2014 and the plant placed in service before 2021.

The NRC has formed an Office of New Reactors (NRO), and David Matthews, director of the Division of New Reactor Licensing, led the development of the latest revision of a new licensing process designed to be more predictable by encouraging the standardization of plant designs, resolving safety and environmental issues and providing for public participation before construction begins. With a fully staffed workforce and a commitment to “enable the safe, secure and environmentally responsible use of nuclear power in meeting the nation’s future energy needs,” Matthews is determined to ensure that the NRC is not a risk factor that contributes to the uncertainty of projects but rather an organizing force that creates predictability. Matthews declares, “This isn’t your father’s NRC.”

This simplified licensing process consists of the following elements:

  • An early site permit (ESP) for locations of potential facilities.
  • Design certification (DC) for the reactor design to be used.
  • Combined operating license (COL) for the certified reactor as designed to be located on the site. The COL contains the inspections, tests, analyses and acceptance criteria (ITAAC) to demonstrate that the plant was built to the approved specifications.

According to Matthews, the best-case scenario for the period between when a COL is docketed and when the license process is complete is 33 months, with an additional 12 months for public hearings. When asked if anything could be done to speed this process, Matthews reported that every delay he’s seen thus far has been attributable to a cause beyond the NRC’s control; most often, it’s the applicant that’s having a hard time meeting the schedule. Recently approved schedules are several months longer than the best-case estimate.

The manufacturers of nuclear reactors have stepped up to the plate to achieve standard design certification for their nuclear reactors; four are approved, and three are in progress.

Utility companies are taking innovative approaches to support the NRC’s standardization principles, which directly impact costs. (Current conventional wisdom puts the price of a new reactor at between $4 billion and $5.5 billion, with some estimates of fully loaded costs as high as $7 billion.) Consortiums have been formed to support cross-company standardization around a particular reactor design. NuStart and UniStar are multi-company consortiums collaborating on the development of their COLs.

Bryce Shriver, who leads PPL Corp.’s nuclear power strategy and recently announced that PPL had selected UniStar to build its next nuclear facility, is impressed with the level of standardization UniStar is employing for its plants. From the specifics of the reactor design to the carpet color, UniStar – with four plants on the drawing board – intends to make each plant as identical as possible.

Reactor designers and construction companies are adding to the standardization with turnkey approaches, formulating new construction methods that include modular techniques; sophisticated scheduling and configuration management software; automated data; project management and document control; and designs that are substantially complete before construction begins. Contractors are taking seriously the lessons learned from plants built outside the United States, and they hope to leverage what they have learned in the first few U.S. projects.

The stewards of the existing nuclear fleet also see themselves as part of the future energy solution. They know that continued safe, high-performance operation of current plants is key to maintaining public and state regulator confidence. Most of the scheduled plants are to be co-located with existing nuclear facilities.

Financing nuclear plant construction involves equity investors, utility boards of directors, debt financiers and (ultimately) the ratepayers represented by state regulatory commissions. Despite the size of these deals, the financial community has indicated that debt financing for new nuclear construction will be available. The bigger issue lies with the investors. The more equity-oriented the risk (principally borne by utilities and ratepayers), the more caution there is about the structure of these deals. The debt financiers are relying on the utilities and the consortiums to do the necessary due diligence and put up the equity. There’s no doubt that the federal loan guarantees and subsidies are an absolute necessity, but this form of support is largely driven by the perceived risk of the first projects. Once the capability to build plants in a predictable way (in terms of time, cost, output and so on) has been demonstrated, market forces are expected to be very efficient at allocating capital to these kinds of projects.

The final key to the realization of a nuclear renaissance is the public. Americans have become increasingly concerned about fossil fuels, carbon emissions and the nation’s dependence on foreign oil. The surge in oil prices has focused attention on energy costs and national security. Coal-based generation is seen as an environmental problem: although the United States has abundant coal, dealing with its carbon emissions through clean coal technology means capturing the carbon and sequestering it underground. PPL chairman Jim Miller describes the next challenge for clean coal as NUMBY – the “Not under my back yard” attitude the public is likely to adopt if asked to accept carbon pumped beneath their communities. Alternative energy sources such as wind, solar and geothermal enjoy public support, but they are not yet scalable to the challenge of cleaning up the grid. In general, the public wants clean, safe, reliable, inexpensive power.

THE RISKS

Will nuclear fill that bill and look attractive compared with the alternatives? Although progress has been made and the stage is set, critical issues remain, and they could become problematic. The industry clearly sees and is actively managing some of these issues; others it sees but is less certain how to manage; and still others are so much a part of the fabric of the industry that they go unrecognized. Any one of these issues could slow progress, and the possibility that several could hit simultaneously compounds the risk.

The three widely accepted risk factors for the next phase of nuclear power development are the variability of uranium costs, the availability of quality equipment for construction and the availability of well-trained labor. Not surprisingly for an industry that has been relatively sleepy for several decades, the pipeline for production resources is weak – a problem compounded by the well-understood coming wave of retirements in the utility workforce and the general shortage of skilled trades needed for infrastructure projects. Combine these constraints with a surge in worldwide demand for power plants, and it is easy to understand why the industry is actively pursuing strategies to secure materials and train labor.

The reactor designers, manufacturers and construction companies that would execute these projects display great confidence. They’re keen on the “turnkey solution” as a way to reduce the risk of multiple vendors pointing fingers when things go wrong. Yet these are the same firms that have been openly criticized for change orders and cost overruns. Christopher Crane, chief operating officer of the utility Exelon Corp., warned contractors in a recent industry meeting that the utilities would “not take all the risk this time around.” When faced with complicated infrastructure development in the past, vendors have often pointed to their expertise with complex projects. Is the development of more sophisticated scheduling and configuration management capability, along with the assignment of vendor accountability, enough to handle the complexity issue? The industry is aware of this limitation but does not as yet have strong management techniques for handling it effectively.

Early indications from regulators are that the COLs submitted to date are not meeting the NRC’s guidance and expectations in all respects, possibly because applicants rushed to meet the 2008 year-end deadline for the incentives set forth in the Energy Policy Act. This could extend the licensing process and strain the NRC’s resources. Moreover, the NRC’s requirements deal principally with public safety and environmental protection; myriad other design requirements must be met before a plant can operate profitably.

The bigger risk is that the core strength of the industry – its ability to make significant incremental improvements – could also sow the seeds of its failure as it faces this next challenge. Investors, state regulators and the public are not likely to excuse serious cost overruns and schedule delays as they may have in the past. Utility executives are clear that nuclear makes sense only to the extent that it is economical. Asked what single concern they find most troubling, they often reply, “That we don’t know what we don’t know.”

What we do know is that no proven method is yet in place for developing this next generation of nuclear power plants successfully, and that the industry’s core management skill set may not be sufficient to build a process that goes beyond “learn as you go.” It is therefore critical that the first few plants succeed – not just for their investors but for the entire industry.

THE OPPORTUNITY – KNOWING WHAT YOU DON’T KNOW

The vendors supporting the nuclear power industry include some of the most prestigious engineering, equipment design and manufacturing firms in the world: Bechtel, Fluor, GE, Westinghouse, Areva and Hitachi. Despite this, the industry is not known for a strong foundation in managing innovation. Getting a plant to the point of producing power requires not only technology but also complex physical capital, myriad intangible human assets, political forces and public opinion. More advanced management science could thus be the missing piece of the puzzle for the nuclear power industry.

An advanced decision-making framework can help utilities manage unpredictable events, improving their ability to handle both the planned work and the unanticipated disruptions that often beset long, complex projects. By applying advanced management science, the nuclear industry can take what it knows and create a learning environment for finding out more about what it doesn’t know, improving its odds of success.
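To make that idea concrete – strictly as an illustration of the kind of quantitative decision support meant here, not a description of any tool the industry actually uses – the short sketch below runs a Monte Carlo simulation of a project schedule. Assumed duration ranges for licensing, procurement and construction are sampled repeatedly to estimate the probability of meeting a target completion date; every number in it is a hypothetical assumption.

    import random

    # Monte Carlo sketch of schedule risk for a long, complex project.
    # All durations (in months) are hypothetical assumptions, not industry data.
    def sample_total_duration():
        licensing    = random.triangular(45, 70, 55)   # best, worst, most likely
        procurement  = random.triangular(24, 48, 30)
        construction = random.triangular(48, 90, 60)
        overlap = random.uniform(0, 12)   # assume some licensing/procurement overlap
        return licensing + procurement + construction - overlap

    TARGET_MONTHS = 130    # hypothetical completion target
    TRIALS = 10_000
    on_time = sum(sample_total_duration() <= TARGET_MONTHS for _ in range(TRIALS))

    print(f"Chance of finishing within {TARGET_MONTHS} months: {on_time / TRIALS:.1%}")

Rerun with distributions informed by data from the first projects, this kind of simple model is one way a “learning environment” can turn what the industry doesn’t yet know into an explicit, improvable estimate rather than an unexamined assumption.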