The Power of Prediction: Improving the Odds of a Nuclear Renaissance

After 30 years of disfavor in the United States, the nuclear power industry is poised for resurgence. With the passage of the Energy Policy Act of 2005, the specter of oil prices above $100 per barrel and growing public recognition that global warming is real, nuclear power is now considered one of the most practical ways to clean up the power grid and help the United States reduce its dependence on foreign oil. The industry has responded with a resolve to build a new fleet of nuclear plants in anticipation of what has been referred to as a nuclear renaissance.

The nuclear power industry is characterized by a remarkable level of physics and mechanical science. Yet, given the confluence of a number of problematic issues – an aging workforce, the shortage of skilled trades, the limited availability of equipment and parts, and a history of late, over-budget projects – questions arise about whether the level of management science the industry plans to use is sufficient to navigate the challenges ahead.

According to data from the Energy Information Administration (EIA), nuclear power supplies roughly 20 percent of U.S. electricity, with approximately 106 gigawatts (GW) of generating capacity from 104 reactor units housed at 66 plants. To date, more than 30 new reactors have been proposed, which would yield a net increase of approximately 19 GW of nuclear capacity through 2030. Considering the growth of energy demand, this increased capacity will barely keep pace with increasing base load requirements.

According to Assistant Secretary for Nuclear Energy Dennis Spurgeon, we will need approximately 45 new reactors online by 2030 just to maintain the 20 percent share of U.S. electricity generation that nuclear power already holds.

Meanwhile, Morgan Stanley vice chairman Jeffrey Holzschuh is very positive about the next generation of nuclear power but warns that the industry’s future is ultimately a question of economics. “Given the history, the markets will be cautious,” he says.

As shown in Figures 1-3, nuclear power is cost competitive with other forms of generation, but its upfront capital costs are comparatively high. Historically, long construction periods have led to serious cost volatility. The viability of the nuclear power industry ultimately depends on its ability to demonstrate that plants can be built economically and reliably. Holzschuh predicts, “The first few projects will be under a lot of public scrutiny, but if they are approved, they will get funded. The next generation of nuclear power will likely be three to five plants or 30, nothing in between.”

Due to its cohesive identity, the nuclear industry is viewed by the public and investors as a single entity, making the fate of industry operators – for better or for worse – a shared destiny. For that reason, it’s widely believed that if these first projects suffer the same sorts of significant cost overruns and delays experienced in the past, the projected renaissance will quickly give way to a return to the dark ages.

THE PLAYERS

Utility companies, regulatory authorities, reactor manufacturers, design and construction vendors, financiers and advocacy groups all have critical roles to play in creating a viable future for the nuclear power industry – one that will begin with the successful completion of the first few plants in the United States. By all accounts, an impressive foundation has been laid, beginning with an array of government incentives (such as loan guarantees and tax credits) and simplified regulation to help jump-start the industry.

Under the Energy Policy Act of 2005, the U.S. Department of Energy has the authority to issue $18.5 billion in loan guarantees for new nuclear plants and $2 billion for uranium enrichment projects. In addition, there’s standby support for indemnification against Nuclear Regulatory Commission (NRC) and litigation-oriented delays for the first six advanced nuclear reactors. The Treasury Department has issued guidelines for an allocation and approval process for production tax credits for advanced nuclear: 1.8 cents per kilowatt-hour production tax credit for the first eight years of operation with the final rules to be issued in fiscal year 2008.

The 20-year renewal of the Price-Anderson Act in 2005 and anticipated future restrictions on carbon emissions further improve the comparative attractiveness of nuclear power. To be eligible for the 2005 production tax credits, a license application must be tendered to the NRC by the end of 2008, with construction beginning before 2014 and the plant placed in service before 2021.
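To put these incentives in rough perspective, the sketch below estimates the annual value of the 1.8-cent-per-kilowatt-hour credit for a single hypothetical reactor and checks the eligibility dates described above. The plant size and capacity factor are illustrative assumptions, and the calculation ignores the program’s statutory caps.

```python
# Illustrative estimate of the advanced-nuclear production tax credit (PTC).
# The 1.8 cents/kWh rate, eight-year window and eligibility dates come from
# the discussion above; the plant size and capacity factor are hypothetical.
from datetime import date

PTC_RATE_USD_PER_KWH = 0.018   # 1.8 cents per kilowatt-hour
CREDIT_YEARS = 8               # first eight years of operation

def annual_ptc_value(capacity_mw: float, capacity_factor: float) -> float:
    """Rough annual credit for one plant, ignoring statutory caps."""
    kwh_per_year = capacity_mw * 1000 * 8760 * capacity_factor
    return kwh_per_year * PTC_RATE_USD_PER_KWH

def meets_epact_2005_deadlines(application: date, construction_start: date,
                               in_service: date) -> bool:
    """Eligibility gates described above: apply by end of 2008,
    begin construction before 2014, enter service before 2021."""
    return (application <= date(2008, 12, 31)
            and construction_start < date(2014, 1, 1)
            and in_service < date(2021, 1, 1))

if __name__ == "__main__":
    # Hypothetical 1,100 MW unit running at a 90 percent capacity factor.
    value = annual_ptc_value(1100, 0.90)
    print(f"Estimated credit: ${value/1e6:.0f} million per year for {CREDIT_YEARS} years")
    print("Eligible:", meets_epact_2005_deadlines(date(2008, 9, 1),
                                                  date(2012, 6, 1),
                                                  date(2019, 5, 1)))
```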

The NRC has formed an Office of New Reactors (NRO), and David Matthews, director of the Division of New Reactor Licensing, led the development of the latest revision of a new licensing process that’s designed to be more predictable by encouraging the standardization of plant designs, resolving safety and environmental issues and providing for public participation before construction begins. With a fully staffed workforce and a commitment to “enable the safe, secure and environmentally responsible use of nuclear power in meeting the nation’s future energy needs,” Matthews is determined to ensure that the NRC is not a risk factor that contributes to the uncertainty of projects but rather an organizing force that will create predictability. Matthews declares, “This isn’t your father’s NRC.”

This simplified licensing process consists of the following elements:

  • An early site permit (ESP) for locations of potential facilities.
  • Design certification (DC) for the reactor design to be used.
  • Combined operating license (COL) for the certified reactor as designed to be located on the site. The COL contains the inspections, tests, analyses and acceptance criteria (ITAAC) to demonstrate that the plant was built to the approved specifications.

According to Matthews, the best-case scenario for the period between when a COL is docketed and when the license process is complete is 33 months, with an additional 12 months for public hearings. When asked if anything could be done to speed this process, Matthews reported that every delay he’s seen thus far has been attributable to a cause beyond the NRC’s control. Most often, it’s the applicant that’s having a hard time meeting the schedule. Recently approved schedules have run several months longer than the best-case estimate.
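As a rough illustration of that timeline, the sketch below projects a best-case license-issuance date from a hypothetical docketing date, using the 33-month review and 12-month hearing figures cited above.

```python
# Minimal sketch of the best-case COL review timeline described above:
# roughly 33 months of NRC technical review plus about 12 months of
# public hearings after the application is docketed. Dates are illustrative.
from datetime import date

REVIEW_MONTHS = 33
HEARING_MONTHS = 12

def add_months(start: date, months: int) -> date:
    """Add calendar months to a date (day clamped for simplicity)."""
    year, month = divmod(start.year * 12 + (start.month - 1) + months, 12)
    return date(year, month + 1, min(start.day, 28))

def best_case_license_date(docketed: date) -> date:
    """Earliest plausible license issuance under the best-case assumptions."""
    return add_months(docketed, REVIEW_MONTHS + HEARING_MONTHS)

if __name__ == "__main__":
    docketed = date(2008, 3, 1)  # hypothetical docketing date
    print("Best-case license issuance:", best_case_license_date(docketed))
```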

The manufacturers of nuclear reactors have stepped up to the plate to achieve standard design certification for their nuclear reactors; four are approved, and three are in progress.

Utility companies are taking innovative approaches to support the NRC’s standardization principles, which directly impact costs. (Current conventional wisdom puts the price of a new reactor at between $4 billion and $5.5 billion, with some estimates of fully loaded costs as high as $7 billion.) Consortiums have been formed to support cross-company standardization around a particular reactor design. NuStart and UniStar are multi-company consortiums collaborating on the development of their COLs.

Bryce Shriver – leader of PPL Corp.’s nuclear power strategy, who recently announced PPL had selected UniStar to build its next nuclear facility – is impressed with the level of standardization UniStar is employing for its plants. From the specifics of the reactor design to the carpet color, UniStar – with four plants on the drawing board – intends to make each plant as identical as possible.

Reactor designers and construction companies are adding to the standardization with turnkey approaches, formulating new construction methods that include modular techniques; sophisticated scheduling and configuration management software; automated data, project management and document control; and designs that are substantially complete before construction begins. Contractors are taking seriously the lessons learned from plants built outside the United States, and they hope to leverage what they have learned in the first few U.S. projects.

The stewards of the existing nuclear fleet also see themselves as part of the future energy solution. They know that continued safe, high-performance operation of current plants is key to maintaining public and state regulator confidence. Most of the scheduled plants are to be co-located with existing nuclear facilities.

Financing nuclear plant construction involves equity investors, utility boards of directors, debt financiers and (ultimately) the ratepayers represented by state regulatory commissions. Despite the size of these deals, the financial community has indicated that debt financing for new nuclear construction will be available. The bigger issue lies with the investors. The more equity-oriented the risk (principally borne by utilities and ratepayers), the more caution there is about the structure of these deals. The debt financiers are relying on the utilities and the consortiums to do the necessary due diligence and put up the equity. There’s no doubt that the federal loan guarantees and subsidies are an absolute necessity, but this form of support is largely driven by the perceived risk of the first projects. Once the capability to build plants in a predictable way (in terms of time, cost, output and so on) has been demonstrated, market forces are expected to be very efficient at allocating capital to these kinds of projects.

The final key to the realization of a nuclear renaissance is the public. Americans have become increasingly concerned about fossil fuels, carbon emissions and the nation’s dependence on foreign oil. The surge in oil prices has focused attention on energy costs and national security. Coal-based energy production is seen as an environmental issue. Although the United States has plenty of access to coal, dealing with carbon emissions through clean coal technology involves sequestering the carbon and pumping it underground. PPL chairman Jim Miller describes the next challenge for clean coal as NUMBY – the “Not under my back yard” attitude the public is likely to adopt if forced to consider carbon pumped under their communities. Alternative energy sources such as wind, solar and geothermal enjoy public support, but they are not yet scalable for the challenge of cleaning up the grid. In general, the public wants clean, safe, reliable, inexpensive power.

THE RISKS

Will nuclear fill that bill and look attractive compared with the alternatives? Although progress has been made and the stage is set, critical issues remain, and they could become problematic. While the industry clearly sees and is actively managing some of these issues, there are others the industry sees but is not as certain about how to manage – and still others that are so much a part of the fabric of the industry that they go unrecognized. Any one of these issues could slow progress; the fact that there are several that could hit simultaneously multiplies the risk exponentially.

The three widely accepted risk factors for the next phase of nuclear power development are the variability of the cost of uranium, the availability of quality equipment for construction and the availability of well-trained labor. Not surprisingly for an industry that’s been relatively sleepy for several decades, the pipeline for production resources is weak – a problem compounded by the well-understood coming wave of retirements in the utility workforce and the general shortage of skilled trades needed to work on infrastructure projects. Combine these constraints with a surge in worldwide demand for power plants, and it’s easy to understand why the industry is actively pursuing strategies to secure materials and train labor.

The reactor designers, manufacturers and construction companies that would execute these projects display great confidence. They’re keen on the “turnkey solution” as a way to reduce the risk of multiple vendors pointing fingers when things go wrong. Yet these are the same firms that have been openly criticized for change orders and cost overruns. Christopher Crane, chief operating officer of the utility Exelon Corp., warned contractors in a recent industry meeting that the utilities would “not take all the risk this time around.” When faced with complicated infrastructure development in the past, vendors have often pointed to their expertise with complex projects. Is the development of more sophisticated scheduling and configuration management capability, along with the assignment of vendor accountability, enough to handle the complexity issue? The industry is aware of this limitation but does not as yet have strong management techniques for handling it effectively.

Early indications from regulators are that the COLs submitted to date are not meeting the NRC’s guidance and expectations in all regards, possibly a result of the applicants’ rush to make the 2008 year-end deadline for the incentives set forth in the Energy Policy Act. This could extend the licensing process and strain the resources of the NRC. In addition, the requirements of the NRC principally deal with public safety and environmental concerns. There are myriad other design requirements entailed in making a plant operate profitably.

The bigger risk is that the core strength of the industry – its ability to make significant incremental improvements – could also serve as the seed of its failure as it faces this next challenge. Investors, state regulators and the public are not likely to excuse serious cost overruns and time delays as they may have in the past. Utility executives are clear that nuclear is good to the extent that it’s economical. When asked what single concern they find most troubling, they often reply, “That we don’t know what we don’t know.”

What we do know is that there are no methods currently in place for beginning successful development of this next generation of nuclear power plants, and that the industry’s core management skill set may not be sufficient to build a process that differs from a “learn as you go” approach. Thus, it’s critical that the first few plants succeed – not just for their investors but for the entire industry.

THE OPPORTUNITY – KNOWING WHAT YOU DON’T KNOW

The vendors supporting the nuclear power industry represent some of the most prestigious engineering and equipment design and manufacturing firms in the world: Bechtel, Fluor, GE, Westinghouse, Areva and Hitachi. Despite this, the industry is not known for having a strong foundation in managing innovation. Getting a plant to the point of producing power requires managing not just technology but also complex physical capital, myriad intangible human assets, political forces and public opinion. Thus, more advanced management science could represent the missing piece of the puzzle for the nuclear power industry.

An advanced decision-making framework can help utilities manage unpredictable events, increasing their ability to handle the planning and the anticipated disruptions that often beset long, complex projects. By using advanced management science, the nuclear industry can take what it knows and create a learning environment to find out more about what it doesn’t know, improving its odds for success.

The Virtual Generator

Electric utility companies today constantly struggle to find a balance between generating sufficient power to satisfy their customers’ dynamic load requirements and minimizing their capital and operating costs. They spend a great deal of time and effort attempting to optimize every element of their generation, transmission and distribution systems to achieve both their physical and economic goals.

In many cases, “real” generators waste valuable resources – waste that, if not managed efficiently, can go directly to the bottom line. Energy companies therefore find the concept of a “virtual generator,” or a virtual source of energy that can be turned on when needed, very attractive. Although virtual generators generally represent only a small percentage of a utility’s overall generation capacity, they are quick to deploy and cost-effective, and they offer a form of “green energy” that can help utilities meet carbon emission standards.

Virtual generators use forms of dynamic voltage and capacitance (Volt/VAr) adjustment that are controlled through sensing, analytics and automation. The overall process involves first flattening or tightening the voltage profiles by adding voltage regulators to the distribution system. Then, by moving the voltage profile up or down within the operational voltage bounds, utilities can achieve significant benefits (Figure 1). It’s important to understand, however, that because voltage adjustments will influence VArs, utilities must also adjust both the placement and control of capacitors (Figure 2).

Various business drivers will influence the use of Volt/VAr. A utility could, for example, use Volt/VAr to:

  • Respond to an external system-wide request for emergency load reduction;
  • Assist in reducing a utility’s internal load – both regional and throughout the entire system;
  • Target specific feeder load reduction through the distribution system;
  • Respond as a peak load relief (a virtual peaker);
  • Optimize Volt/VAr for better reliability and more resiliency;
  • Maximize the efficiency of the system and subsequently reduce energy generation or purchasing needs;
  • Achieve economic benefits, such as generating revenue by selling power on the spot market; and
  • Supply VArs to supplement off-network deficiencies.

Each of the above potential benefits falls into one of four domains: peaking relief, energy conservation, VAr management or reliability enhancement. The peaking relief and energy conservation domains deal with load reduction; VAr management, logically enough, involves management of VArs; and reliability enhancement actually increases load. In this latter domain, the utility will use increased voltage to enable greater voltage tolerances in self-healing grid scenarios, or to improve the performance of non-constant power devices so that they can be removed from the system as soon as possible, thereby improving diversity.

Volt/VAr optimization can be applied to all of these scenarios. It is intended either to drive the power factor of a utility’s distribution network toward unity or to purposefully make the power factor leading in anticipation of a change in load characteristics.

Each of these potential benefits comes from solving a different business problem. Because of this, at times they can even be at odds with each other. Utilities must therefore create fairly complex business rules supported by automation to resolve any conflicts that arise.
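As a simple illustration of how such business rules might be automated, the sketch below resolves conflicting Volt/VAr objectives with a prioritized rule list. The four domains follow the discussion above; the priority ordering, trigger conditions and threshold values are illustrative assumptions rather than recommended practice.

```python
# Hedged sketch of prioritized business rules for resolving conflicting
# Volt/VAr objectives. Domain names follow the article; priorities and
# trigger thresholds are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class GridState:
    system_emergency: bool
    peak_load_mw: float
    peak_threshold_mw: float
    spot_price_usd_mwh: float

@dataclass
class Rule:
    name: str                              # domain and action
    priority: int                          # lower number wins on conflict
    applies: Callable[[GridState], bool]   # trigger condition

RULES = [
    Rule("reliability enhancement (raise voltage)", 1,
         lambda s: s.system_emergency),
    Rule("peaking relief (lower voltage)", 2,
         lambda s: s.peak_load_mw > s.peak_threshold_mw),
    Rule("energy conservation (lower voltage)", 3,
         lambda s: s.spot_price_usd_mwh > 80.0),
    Rule("VAr management (hold voltage, switch capacitors)", 4,
         lambda s: True),
]

def select_objective(state: GridState) -> str:
    """Return the highest-priority rule whose trigger condition is met."""
    applicable = [r for r in RULES if r.applies(state)]
    return min(applicable, key=lambda r: r.priority).name

if __name__ == "__main__":
    state = GridState(system_emergency=False, peak_load_mw=950,
                      peak_threshold_mw=900, spot_price_usd_mwh=65)
    print("Selected objective:", select_objective(state))  # peaking relief
```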

Although the concept of load reduction using Volt/VAr techniques is not new, the ability to automate the capabilities in real time and drive the solutions with various business requirements is a relatively recent phenomenon. Energy produced with a virtual generator is neither free nor unlimited. However, it is real in the sense that it allows the system to use energy more efficiently.

A number of things are driving utilities’ current interest in virtual generators, including the fact that sensors, analytics, simulation, geospatial information, business process logic and other forms of information technology are increasingly affordable and robust. In addition, lower-cost intelligent electronic devices (IEDs) make virtual generators possible and bring them within reach of most electric utility companies.

The ability to innovate an entirely new solution to support the above business scenarios is now within the realm of possibility for the electric utility company. As an added benefit, much of the base IT infrastructure required for virtual generators is the same as that required for other forms of “smart grid” solutions, such as advanced meter infrastructure (AMI), demand side management (DSM), distributed generation (DG) and enhanced fault management. Utilities that implement a well-designed virtual generator solution will ultimately be able to align it with these other power management solutions, thus optimizing all customer offerings that will help reduce load.

HOW THE SOLUTION WORKS

All utilities are required, for regulatory or reliability reasons, to stay within certain high- and low-voltage parameters for all of their customers. In the United States, American Society for Testing and Materials (ASTM) guidelines specify that the nominal voltage for a residential single-phase service should be 120 volts, with a plus or minus 6-volt variance (that is, 114 to 126 volts). Other countries around the world have similar guidelines. Whatever the actual values are, all utilities are required to operate within these high- and low-voltage “envelopes.” In some cases, additional requirements may be imposed as to the amount of variance – the number of volts changed or the percent change in the voltage – that can take place over a period of minutes or hours.
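A minimal sketch of what envelope monitoring might look like in software appears below, using the 114-to-126-volt residential limits described above; the sample readings are hypothetical.

```python
# Simple compliance check against the residential service-voltage envelope
# described above (120 V nominal, +/- 6 V, i.e. 114-126 V). The sample
# readings are hypothetical.
NOMINAL_V = 120.0
LOW_LIMIT_V = 114.0
HIGH_LIMIT_V = 126.0

def envelope_violations(readings_v):
    """Return (index, value) pairs for readings outside the envelope."""
    return [(i, v) for i, v in enumerate(readings_v)
            if not LOW_LIMIT_V <= v <= HIGH_LIMIT_V]

if __name__ == "__main__":
    feeder_readings = [118.2, 121.5, 113.6, 125.1, 127.0]  # hypothetical
    for idx, volts in envelope_violations(feeder_readings):
        print(f"Reading {idx}: {volts:.1f} V is outside "
              f"{LOW_LIMIT_V}-{HIGH_LIMIT_V} V")
```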

Commercial customers may have different high/low values, but the principle remains the same. In fact, it is the mixture of residential, commercial and industrial customers on the same feeder that makes the virtual generation solution almost a requirement if a utility wants to optimize its voltage regulation.

Although it would be ideal for a utility to deliver 120-volt power consistently to all customers, the physical properties of the distribution system as well as dynamic customer loading factors make this difficult. Most utilities are already trying to accomplish this through planning, network and equipment adjustments, and in many cases use of automated voltage control devices. Despite these efforts, however, in most networks utilities are required to run the feeder circuit at higher-than-nominal levels at the head of the circuit in order to provide sufficient voltage for downstream users, especially those at the tails or end points of the circuit.

In a few cases, electric utilities have added manual or automatic voltage regulators to step up voltage at one or more points in a feeder circuit because of nonuniform loading and/or varied circuit impedance characteristics throughout the circuit profile. This stepped-up slope, or curve, allows the utility company to comply with the voltage level requirements for all customers on the circuit. In addition, utilities can satisfy the VAr requirements for operational efficiency of inductive loads using switched capacitor banks, but they must coordinate those capacitor banks with voltage adjustments as well as power demand. Refining voltage profiles through virtual generation usually implies a tight corresponding control of capacitance as well.

The theory behind a robust Volt/VAr-regulated feeder circuit is based on the same principles but applied in an innovative manner. Rather than just using voltage regulators to keep the voltage profile within the regulatory envelope, utilities try to “flatten” the voltage curve or slope. In reality, the overall effect is a stepped slope profile due to economic limitations on the number of voltage regulators applied per circuit. This flattening allows an overall reduction in nominal voltage; in turn, the operator may choose to move the voltage curve up or down within the regulatory voltage envelope. Utilities can derive extra benefit from this solution because all customers within a given section of a feeder circuit can be provided with the same voltage level, which should result in fewer “problem” customers who happen not to be in an ideal place on the circuit. It can also minimize the power wasted by overdriving the voltage at the head of the feeder in order to satisfy customers at the tails.
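The sketch below illustrates the “flatten, then lower” idea in simplified form: a feeder profile that is overdriven at the head is crudely flattened, as added regulation would do, and then shifted down toward the bottom of the envelope. The profile values, the flattening factor and the safety margin are all illustrative assumptions.

```python
# Illustrative sketch of the "flatten, then lower" idea described above.
# Profile values, flattening factor and safety margin are hypothetical.
LOW_LIMIT_V, HIGH_LIMIT_V = 114.0, 126.0
MARGIN_V = 1.0  # keep a safety margin above the low limit (assumption)

def flatten(profile):
    """Crudely flatten the profile by pulling every point toward the
    feeder-tail voltage, mimicking added regulation along the circuit."""
    tail = profile[-1]
    return [tail + 0.25 * (v - tail) for v in profile]

def lower_within_envelope(profile):
    """Shift the whole profile down as far as the envelope allows."""
    headroom = min(profile) - (LOW_LIMIT_V + MARGIN_V)
    shift = max(headroom, 0.0)
    return [v - shift for v in profile], shift

if __name__ == "__main__":
    raw = [125.5, 123.8, 121.9, 119.7, 118.0]  # head ... tail (hypothetical)
    flat = flatten(raw)
    lowered, shift = lower_within_envelope(flat)
    print("Flattened profile:", [round(v, 1) for v in flat])
    print(f"Lowered by {shift:.1f} V:", [round(v, 1) for v in lowered])
```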

THE ROLE OF AUTOMATION IN DELIVERING THE VIRTUAL GENERATOR

Although simple in concept, executing and maintaining a virtual generator solution is a complex task that requires real-time coordination of many assets and business rules. Electrical distribution networks are dynamic systems with constantly changing demands, parameters and influencers. Without automation, utilities would find it impossible to deliver and support virtual generators, because it’s infeasible to expect a human – or even a number of humans – to operate such systems affordably and reliably. Therefore, utilities must leverage automation to put humans in monitoring rather than controlling roles.

There are many “inputs” to an automated solution that supports a virtual generator. These include both dynamic and static information sources. For example, real-time sensor data monitoring the condition of the networks must be merged with geospatial information, weather data, spot energy pricing and historical data in a moment-by-moment, repeating cycle to optimize the business benefits of the virtual generator. Complicating this, in many cases the team managing the virtual generator will not “own” all of the inputs required to feed the automated system. Frequently, they must share this data with other applications and organizational stakeholders. It’s therefore critical that utilities put into place an open, collaborative and integrated technology infrastructure that supports multiple applications from different parts of the business.

One of the most critical aspects of automating a virtual generator is having the right analytical capabilities to decide where and how the virtual generator solution should be applied to support the organization’s overall business objectives. For example, utilities should use load predictors and state estimators to determine future states of the network based on load projections under the various Volt/VAr scenarios they’re considering. Additionally, they should use advanced analytics to determine the resiliency of the network or the probability of internal or external events influencing the virtual generator’s application requirements. Still other types of analyses can provide utilities with a current view of the state of the virtual generator and how much energy it’s returning to the system.
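The sketch below suggests how two of those analytic pieces might fit together: a naive load predictor and a rough estimate of the load relief a given voltage reduction returns to the system. The “CVR factor” relating percent voltage reduction to percent load reduction is a common rule of thumb rather than a figure from this discussion, and the value used here is assumed.

```python
# Hedged sketch of two analytic pieces mentioned above: a naive load
# predictor and a rough estimate of the energy a Volt/VAr reduction
# returns to the system. The CVR factor value of 0.8 is an assumption.
from statistics import mean

ASSUMED_CVR_FACTOR = 0.8  # % load change per % voltage change (assumption)

def predict_next_hour_load(recent_hours_mw):
    """Naive predictor: average of the most recent observations."""
    return mean(recent_hours_mw[-4:])

def virtual_generator_mw(predicted_load_mw, voltage_reduction_pct):
    """Approximate load relief from lowering voltage by the given percent."""
    return predicted_load_mw * ASSUMED_CVR_FACTOR * voltage_reduction_pct / 100.0

if __name__ == "__main__":
    history_mw = [612, 630, 655, 671, 690, 702]   # hypothetical feeder loads
    forecast = predict_next_hour_load(history_mw)
    relief = virtual_generator_mw(forecast, voltage_reduction_pct=2.5)
    print(f"Forecast load: {forecast:.0f} MW, estimated relief: {relief:.1f} MW")
```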

While it is important that all these techniques be used in developing a comprehensive load-management strategy, they must be unified into an actionable, business-driven solution. The business solution must incorporate the values achieved by the virtual generator solutions, their availability, and the ability to coordinate all of them at all times. A voltage management solution that is already being used to support customer load requirements throughout the peak day will be of little use to the utility for load management. It becomes imperative that the utility understand the effect of all the voltage management solutions when they are needed to support the energy demands on the system.

The Distributed Utility of the (Near) Future

The next 10 to 15 years will see major changes – what future historians might even call upheavals – in the way electricity is distributed to businesses and households throughout the United States. The exact nature of these changes and their long-term effect on the security and economic well-being of this country are difficult to predict. However, a consensus already exists among those working within the industry – as well as with politicians and regulators, economists, environmentalists and (increasingly) the general public – that these fundamental changes are inevitable.

This need for change is in evidence everywhere across the country. The February 26, 2008, temporary blackout in Florida served as just another warning that the existing paradigm is failing. Although the exact cause of that blackout had not been identified at the time of this writing, the incident serves as a reminder that the nationwide interconnected transmission and distribution grid is no longer stable. To wit: disturbances in Florida that Tuesday were noted and measured as far away as New York.

A FAILING MODEL

The existing paradigm of nationwide grid interconnection, brought about primarily by the deregulation movement of the late 1990s, emphasizes that electricity be generated at large plants in various parts of the country and then distributed nationwide. There are two reasons this paradigm is failing. First, the transmission and distribution system wasn’t designed to serve as a nationwide grid; it is aged and only marginally stable. Second, political, regulatory and social forces are making the construction of large generating plants increasingly difficult, expensive and, eventually, infeasible.

The previous historic paradigm made each utility primarily responsible for generation, transmission and distribution in its own service territory; this had the benefit of localizing disturbances and fragmenting responsibility and expense. With loose interconnections to other states and regions, a disturbance in one area or a lack of resources in a different one had considerably less effect on other parts of the country, or even other parts of service territories.

For better or worse, we now have a nationwide interconnected grid – albeit one that was neither designed for the purpose nor serves it adequately. Although the existing grid can be improved, the expense would be massive and probably prohibitive. Knowledgeable industry insiders, in fact, calculate that it would cost more than the current market value of all U.S. utilities combined to modernize the nationwide grid and replace its large generating facilities over the next 30 years. Obviously, the paradigm is going to have to change.

While the need for dramatic change is clear, what’s less clear is the direction that change should take. And time is running short: North American Electric Reliability Corp. (NERC) projects serious shortages in the nation’s electric supply by 2016. Utilities recognize the need; they just aren’t sure which way to jump first.

With a number of tipping points already reached (and the changes they describe continuing to accelerate), it’s easy to envision the scenario that’s about to unfold. Consider the following:

  • The United States stands to face a serious supply/demand disconnect within 10 years. Unless something dramatic happens, there simply won’t be nearly enough electricity to go around. Already, some parts of the country are feeling the pinch. And regulatory and legislative uncertainty (especially around global warming and environmental issues) makes it difficult for utilities to know what to do. Building new generation of any type other than “green energy” is extremely difficult, and green energy – which currently meets less than 3 percent of U.S. supply needs – cannot close the growing gap between supply and demand being projected by NERC. Specifically, green energy will not be able to replace the 50 percent of U.S. electricity currently supplied by coal within that 10-year time frame.
  • Fuel prices continue to escalate, and the reliability of the fuel supply continues to decline. In addition, increasing restrictions are being placed on fuel selection, especially coal.
  • A generation of utility workers is nearing retirement, and finding adequate replacements among the younger generation is proving increasingly difficult.
  • It’s extremely difficult to site new transmission – needed to deal with supply-and-demand issues. Even new Federal Energy Regulatory Commission (FERC) authority to authorize corridors is being met with virulent opposition.

SMART GRID NO SILVER BULLET

Distributed generation – including many smaller supply sources to replace fewer large ones – and “smart grids” (designed to enhance delivery efficiency and effectiveness) have been posited as solutions. However, although such solutions offer potential, they’re far from being in place today. At best, smart grids and smarter consumers are only part of the answer. They will help reduce demand (though probably not enough to make up the generation shortfall), and they’re both still evolving as concepts. While most utility executives recognize the problems, they continue to be uncertain about the solutions and have a considerable distance to go before implementing any of them, according to recent Sierra Energy Group surveys.

According to these surveys, more than 90 percent of utility executives now feel that the intelligent utility enterprise and smart grid (IUE/SG) – that is, the distributed utility – represents an inevitable part of their future (Figure 1). This finding was true of all utility types supplying electricity.

Although utility executives understand the problem and the IUE/SG approach to solving part of it, they’re behind in planning on exactly how to implement the various pieces. That “planning lag” for the vision can be seen in Figure 2.

At least some fault for the planning lag can be attributed to forces outside the utilities. While politicians and regulators have been emphasizing conservation and demand response, they’ve failed to produce guidelines for how this will work. And although a number of states have established mandatory green power percentages, Congress failed to do the same in the federal energy legislation adopted in December 2007. While the Energy Policy Act (EPACT) of 2005 “urged” regulators to “urge” utilities to install smart meters, it didn’t make their installation a requirement, and thus regulators in different parts of the country have moved at different speeds on this urging.

Although we’ve entered a new era, utilities remain burdened with the internal problems caused by the “silo mentality” left over from generations of tight regulatory control. Today, real-time data is often still jealously guarded in engineering and operations silos. However, a key component in the development of intelligent utilities will be pushing both real-time and back-office data onto dashboards so that executives can make real-time decisions.

Getting from where utilities were (and in many respects still are) in the last century to where they need to be by 2018 isn’t a problem that can be solved overnight. And, in fact, utilities have historically evolved slowly. Today’s executives know that technological evolution in the utility industry needs to accelerate rapidly, but they’re uncertain where to start. For example, should you install advanced metering infrastructure (AMI) as rapidly as possible? Do you emphasize automating the grid and adding artificial intelligence? Do you continue to build out mobile systems to push data (and more detailed, simpler instructions) to field crews who soon will be much younger and less experienced? Do you rush into home automation? Do you build windmills and solar farms? Utilities have neither the financial nor the human resources to do everything at once.

THE DEMAND FOR AMI

Its name implies that a smart grid will become increasingly self-operating and self-healing – and indeed much of the technology for this type of intelligent network grid has been developed. It has not, however, been widely deployed. Utilities, in fact, have been working on basic distribution automation (DA) – the capability to operate the grid remotely – for a number of years.

As mentioned earlier, most theorists – not to mention politicians and regulators – feel that utilities will have to enable AMI and demand response/home automation if they’re to encourage energy conservation in an impending era of short supplies. While automated meter reading (AMR) has been around for a long time, its penetration remains relatively small in the utilities industry – especially in the case of advanced AMI meters for enabling demand response. According to figures released by Sierra Energy Group and Newton-Evans Research Co., only 8 to 10 percent of this country’s utilities were using AMI meters by 2008.

That said, the push for AMI on the part of both EPACT 2005 and regulators is having an obvious effect. Numerous utilities (including companies like Entergy and Southern Co.) that previously refused to consider AMR now have AMI projects in progress. However, even though an anticipated building boom in AMI is finally underway, there’s still much to be done to enable the demand response that will be desperately needed by 2016.

THE AUTOMATED HOME

The final area we can expect the IUE/SG concept to envelop comes at the residential level. With residential home automation in place, utilities will be able to control usage directly – by adjusting thermostats or compressor cycling, or via other techniques. Again, the technology for this has existed for some time; however, there are very few installations nationwide. A number of experiments were conducted with home automation in the early to mid-1990s, with some subdivisions even being built under the mantra of “demand-side management.”

Demand response – the term currently in vogue with politicians – may be considered more politically correct, but the net result is the same. Home automation will enable regulators, through utilities, to ration usage. Although politicians avoid using the word rationing, if global warming concerns continue to seriously impact utilities’ ability to access adequate generation, rationing will be the result – making direct load control at the residential level one of the most problematic issues in the distributed utility paradigm of the future. Are large numbers of Americans going to acquiesce calmly to their electrical supply being rationed? No one knows, but there seem to be few options.

GREEN PRESSURE AND THE TIPPING POINT

While much legitimate scientific debate remains about whether global warming is real and, if so, whether it’s a naturally occurring or man-made phenomenon (arising primarily from carbon dioxide emissions), that debate is diminishing among politicians at every level. The majority of politicians, in fact, have bought into the notion that carbon emissions from many sources – primarily the generation of electricity by burning coal – are the culprit.

Thus, despite continued scientific debate, the political tipping point has been reached, and U.S. politicians are making moves to force this country’s utility industry to adapt to a situation that may or may not be real. Whether or not it makes logical or economic sense, utilities are under increasing pressure to adopt the Intelligent Utility/Smart Grid/Home Automation/Demand Response model – a model that includes many small generation points to make up for fewer large plants. This political tipping point is also shutting down more proposed generation projects each month, adding to the likely shortage. Since 2000, approximately 50 percent of all proposed new coal-fired generation plants have been canceled, according to energy-industry adviser Wood Mackenzie (Gas and Power Service Insight, February 2008).

In the distant future, as technology continues to advance, electric generation in the United States will likely include a mix of energy sources, many of them distributed and green. However, there’s no way that in the next 10 years – the window of greatest concern in the NERC projections on the generation and reliability side – green energy will be ready and available in sufficient quantities to forestall a significant electricity shortfall. Nuclear energy represents the only truly viable solution; however, ongoing opposition to this form of power generation makes it unlikely that sufficient nuclear energy will be available within this period. The already-lengthy licensing process (though streamlined somewhat of late by the Nuclear Regulatory Commission) is exacerbated by lawsuits and opposition every step of the way. In addition, most of the necessary engineering and manufacturing processes have been lost in the United States over the last 30 years – the time elapsed since the last U.S. nuclear plant was built – making it necessary to reacquire that knowledge from abroad.

The NERC Reliability Report of Oct. 15, 2007, points strongly toward a significant shortfall of electricity within approximately 10 years – a situation that could lead to rolling blackouts and brownouts in parts of the country that have never experienced them before. It could also lead to mandatory “demand response” – in other words, rationing – at the residential level. This situation, however, is not inevitable: technology exists to prevent it (including nuclear and cleaner coal now as well as a gradual development of solar, biomass, sequestration and so on over time, with wind for peaking). But thanks to concern over global warming and other issues raised by the environmental community, many politicians and regulators have become convinced otherwise. And thus, they won’t consider a different tack to solving the problem until there’s a public outcry – and that’s not likely to occur for another 10 years, at which point the national economy and utilities may already have suffered tremendous (possibly irreparable) harm.

WHAT CAN BE DONE?

The problem the utilities industry faces today is neither economic nor technological – it’s ideological. The global warming alarmists are shutting down coal before sufficient economically viable replacements (with the possible exception of nuclear) are in place. And the rest of the options are tied up in court. (For example, the United States needs some 45 liquefied natural gas facilities to support a shift to gas – a costly fuel with iffy reliability – but only five have been built; the rest are tied up in court.) As long as it’s possible to tie up nuclear applications for five to 10 years and shut down “clean coal” plants through the political process, the U.S. utility industry is left with few options.

So what are utilities to do? They must get much smarter (IUE/SG), and they must prepare for rationing (AMI/demand response). As seen in SEG studies, utilities still have a ways to go in these areas, but at least this is a strategy that can (for the most part) be put in place within 10 to 15 years. The technology for IUE/SG already exists; it’s relatively inexpensive (compared with large-scale green energy development and nuclear plant construction); and utilities can employ it with relatively little regulatory oversight. In fact, regulators are actually encouraging it.

For these reasons, IUE/SG represents a major bridge to a more stable future. Even if today’s apocalyptic scenarios fail to develop – that is, global warming is debunked, or new generation sources develop much more rapidly than expected – intelligent utilities with smart grids will remain a good idea. The paradigm is shifting as we watch – but will that shift be completed in time to prevent major economic and social dislocation? Fasten your seatbelts: the next 10 to 15 years should be very interesting!

The Technology Demonstration Center

When a utility undergoes a major transformation – such as adopting new technologies like advanced metering – the costs and time involved require that the changes be accepted and adopted by each of the three major stakeholder groups: regulators, customers and the utility’s own employees. A technology demonstration center serves as an important tool for promoting acceptance and adoption of new technologies by displaying tangible examples and demonstrating the future customer experience. IBM has developed the technology center development framework as a methodology to efficiently define the strategy and tactics required to develop a technology center that will elicit the desired responses from those key stakeholders.

KEY STAKEHOLDER BUY-IN

To successfully implement major technology change, utilities need to consider the needs of the three major stakeholders: regulators, customers and employees.

Regulators. Utility regulators are naturally wary of any transformation that affects their constituents on a grand scale, and thus their concerns must be addressed to encourage regulatory approval. The technology center serves two purposes in this regard: educating the regulators and showing them that the utility is committed to educating its customers on how to receive the maximum benefits from these technologies.

Given the size of a transformation project, it’s critical that regulators support the increased spending required and any consequent increase in rates. Many regulators, even those who favor new technologies, believe that the utility will benefit the most and should thus cover the cost. If utilities expect cost recovery, the regulators need to understand the complexity of new technologies and the costs of the interrelated systems required to manage these technologies. An exhibit in the technology center can go “behind the curtain,” giving regulators a clearer view of these systems, their complexity and the overall cost of delivering them.

Finally, each stage in the deployment of new technologies requires a new approval process and provides opportunities for resistance from regulators. For the utility, staying engaged with regulators throughout the process is imperative, and the technology center provides an ideal way to continue the conversation.

Customers. Once regulators give their approval, the utility must still make its case to the public. The success of a new technology project rests on customers’ adoption of the technology. For example, if customers continue using appliances as they always have – at a regular pace throughout the day, without adjusting for off-peak pricing – the utility will fail to achieve the major planned cost advantage: a reduction in production facilities. Wide-scale customer adoption is therefore key. Indeed, general estimates indicate that customer adoption rates of roughly 20 percent are needed to break even in a critical peak-pricing model. [1]
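The back-of-the-envelope sketch below shows the shape of that break-even logic. Every input is a hypothetical placeholder chosen only to reproduce a roughly 20 percent result; actual program costs and avoided costs will vary by utility.

```python
# Back-of-the-envelope sketch of the break-even logic behind the adoption
# figure cited above. All inputs are hypothetical placeholders; only the
# shape of the calculation is the point, not the 20 percent estimate.
def breakeven_adoption_rate(program_cost_usd: float,
                            customers: int,
                            avoided_cost_per_adopter_usd: float) -> float:
    """Fraction of customers who must shift load for savings to cover cost."""
    return program_cost_usd / (customers * avoided_cost_per_adopter_usd)

if __name__ == "__main__":
    rate = breakeven_adoption_rate(program_cost_usd=150e6,           # assumed
                                   customers=1_500_000,              # assumed
                                   avoided_cost_per_adopter_usd=500) # assumed
    print(f"Break-even adoption rate: {rate:.0%}")
```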

Given the complexity of these technologies, it’s quite possible that customers will fail to see the value of the program – particularly in the context of the changes in energy use they will need to undertake. A well-designed campaign that demonstrates the benefits of tiered pricing will go a long way toward encouraging adoption. By showcasing the future customer experience, the technology center can provide a tangible example that serves to create buzz, get customers excited and educate them about benefits.

Employees. Obtaining employee buy-in on new programs is as important as winning over the other two stakeholder groups. For transformation to be successful, an understanding of the process must be moved out of the boardroom and communicated to the entire company. Employees whose responsibilities will change need to know how they will change, how their interactions with the customer will change and what benefits are in it for them. At the same time, utility employees are also customers. They talk to friends and spread the message. They can be the utility’s best advocates or its greatest detractors. Proper internal communication is essential for a smooth transition from the old ways to the new, and the technology center can and should be used to educate employees on the transformation.

OTHER GOALS FOR THE TECHNOLOGY DEMONSTRATION CENTER

The objectives discussed above represent one possible set of goals for a technology center. Utilities may well have other reasons for erecting the technology center, and these should be addressed as well. As an example, the utility may want to present a tangible display of its plans for the future to its investors, letting them know what’s in store for the company. Likewise, the utility may want to be a leader in its industry or region, and the technology center provides a way to demonstrate that to its peer companies. The utility may also want to be recognized as a trendsetter in environmental progress, and a technology center can help people understand the changes the company is making.

The technology center needs to be designed with the utility’s particular environment in mind. The technology center development framework is, in essence, a road map created to help the utility define and prioritize the technology center’s key strategic components to maximize its impact on the intended audience.

DEVELOPING THE TECHNOLOGY CENTER

Unlike other aspects of a traditional utility, the technology center needs to appeal to customers visually, as well as explain the significance and impact of new technologies. The technology center development framework presented here was developed by leveraging trends and experiences in retail, including “experiential” retail environments such as the Apple Stores in malls across the United States. These new retail environments offer a much richer and more interactive experience than traditional retail outlets, which may employ some basic merchandising and simply offer products for sale.

Experiential environments have arisen partly as a response to competition from online retailers and the increased complexity of products. The technology center development framework uses the same state-of-the-art design strategies adopted by high-end retailers, inspiring the utility’s executives and leadership to create a compelling experience that will elicit the desired response and buy-in from the stakeholders described above.

Phase 1: Technology Center Strategy

During this phase, a utility typically spends four to eight weeks developing an optimal strategy for the technology center. To accomplish this, planners identify and delineate in detail three major elements:

  • The technology center’s goals;
  • Its target audience; and
  • Content required to achieve those goals.

As shown in Figure 1, these pieces are not mutually exclusive; in fact, they’re more likely to be iterative: The technology center’s goals set the stage for determining the audience and content, and those two elements influence each other. The outcome of this phase is a complete strategy road map that defines the direction the technology center will take.

To understand the Phase 1 objectives properly, it’s necessary to examine the logic behind them. The methodology focuses on the three elements mentioned previously – goals, audience and content – because these are easily overlooked and misaligned by organizations.

Utility companies inevitably face multiple and competing goals. Thus, it’s critical to identify the goals specifically associated with the technology center and to distinguish them from other corporate goals or goals associated with implementing a new technology. Taking this step forces the organization to define which goals can be met by the technology center with the greatest efficiency, and establishes a clear plan that can be used as a guide in resolving the inevitable future conflicts.

Similarly, the stakeholders served by the utility represent distinct audiences. Based on the goals of the center and the organization, as well as the internal expectations set by managers, the target audience needs to be well defined. Many important facets of the technology center, such as content and location, will be partly determined by the target audience. Finally, the right content is critical to success. A regulator may want to see different information than customers do.

In addition, the audience’s specific needs dictate different content options. Do the utility’s customers care about the environment? Do they care more about advances in technology? Are they concerned about how their lives will change in the future? These questions need to be answered early in the process.

The key to successfully completing Phase 1 is constant engagement with the utility’s decision makers, since their expectations for the technology center will vary greatly depending on their responsibilities. Throughout this phase, the technology center’s planners need to meet with these decision makers on a regular basis, gather and respect their opinions, and come to the optimal mix for the utility on the whole. This can be done through interviews or a series of workshops, whichever is better suited for the utility. We have found that by employing this process, an organization can develop a framework of goals, audience and content mix that everyone will agree on – despite differing expectations.

Phase 2: Design Characteristics

The second phase of the development framework focuses on the high-level physical layout of the technology center. These “design characteristics” will affect the overall layout and presentation of the technology center.

We have identified six key characteristics that need to be determined. Each is developed as a trade-off between two extremes; this helps utilities understand the issues involved and debate the solutions. Again, there are no right answers to these issues – the optimal solution depends on the utility’s environment and expectations:

  • Small versus large. The technology center can be small, like a cell phone store, or large, like a Best Buy.
  • Guided versus self-guided. The center can be designed to allow visitors to guide themselves, or staff can be retained to guide visitors through the facility.
  • Single versus multiple. There may be a single site, or multiple sites. As with the first issue (small versus large), one site may be a large flagship facility, while the others represent smaller satellite sites.
  • Independent versus linked. Depending on the nature of the exhibits, technology center sites may operate independently of each other or include exhibits that are remotely linked in order to display certain advanced technologies.
  • Fixed versus mobile. The technology center can be in a fixed physical location, but it can also be mounted on a truck bed to bring the center to audiences around the region.
  • Static versus dynamic. The exhibits in the technology center may become outdated. How easy will it be to change or swap them out?

Figure 2 illustrates a sample set of design characteristics for one technology center, using a sample design characteristic map. This map shows each of the characteristics laid out around the hexagon, with the preference ranges represented at each vertex. By mapping out the utility’s options with regard to the design characteristics, it’s possible to visualize the trade-offs inherent in these decisions, and thus identify the optimal design for a given environment. In addition, this type of map facilitates reporting on the project to higher-level executives, who may benefit from a visual executive summary of the technology center’s plan.
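For teams that want to work with the map programmatically, the sketch below represents each characteristic as a score between its two extremes. The characteristic names come from the list above; the scoring scale and sample values are illustrative assumptions.

```python
# Hedged sketch of the design-characteristic map described above: each of
# the six trade-offs is scored on a 0-1 scale between its two extremes.
# The sample scores are illustrative, not recommendations.
DESIGN_CHARACTERISTICS = {
    # characteristic: (extreme at 0.0, extreme at 1.0)
    "size":     ("small", "large"),
    "guidance": ("self-guided", "guided"),
    "sites":    ("single", "multiple"),
    "linkage":  ("independent", "linked"),
    "mobility": ("fixed", "mobile"),
    "content":  ("static", "dynamic"),
}

def describe(scores: dict) -> None:
    """Print where a proposed center falls on each trade-off."""
    for name, score in scores.items():
        low, high = DESIGN_CHARACTERISTICS[name]
        leaning = high if score >= 0.5 else low
        print(f"{name:>8}: {score:.1f} (leans {leaning})")

if __name__ == "__main__":
    sample_center = {"size": 0.7, "guidance": 0.8, "sites": 0.3,
                     "linkage": 0.2, "mobility": 0.4, "content": 0.9}
    describe(sample_center)
```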

The tasks in Phase 2 require the utility’s staff to be just as engaged as in the strategy phase. A workshop or interviews with staff members who understand the various needs of the utility’s region and customer base should be conducted to work out an optimal plan.

Phase 3: Execution Variables

Phases 1 and 2 provide a strategy and design for the technology center, and allow the utility’s leadership to formulate a clear vision of the project and come to agreement on the ultimate purpose of the technology center. Phase 3 involves engaging the technology developers to identify which aspects of the new technology – for example, smart appliances, demand-side management, outage management and advanced metering – will be displayed at the technology center.

During this phase, utilities should create a complete catalog of the technologies that will be demonstrated, and match them up against the strategic content mix developed in Phase 1. A ranking is then assigned to each potential new technology based on several considerations, such as how well it matches the strategy, how feasible it is to demonstrate the given technology at the center, and what costs and resources would be required. Only the most efficient and well-matched technologies and exhibits will be displayed.
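A simple weighted-scoring sketch of that ranking step appears below. The three criteria reflect the considerations described above, while the weights, scoring scale and candidate scores are assumptions for demonstration only.

```python
# Illustrative weighted-scoring sketch for the Phase 3 ranking described
# above. Criteria follow the text; weights, scales and candidate scores
# are assumptions for demonstration only.
WEIGHTS = {"strategy_match": 0.5, "feasibility": 0.3, "cost_efficiency": 0.2}

def rank_exhibits(candidates: dict) -> list:
    """Return candidate exhibits sorted by weighted score, best first."""
    def score(criteria):
        return sum(WEIGHTS[k] * criteria[k] for k in WEIGHTS)
    return sorted(candidates.items(), key=lambda kv: score(kv[1]), reverse=True)

if __name__ == "__main__":
    candidates = {   # scores on a 1-5 scale (hypothetical)
        "smart appliances":       {"strategy_match": 5, "feasibility": 4, "cost_efficiency": 4},
        "advanced metering":      {"strategy_match": 4, "feasibility": 5, "cost_efficiency": 4},
        "outage management demo": {"strategy_match": 3, "feasibility": 3, "cost_efficiency": 5},
    }
    for name, criteria in rank_exhibits(candidates):
        print(name)
```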

During Phase 3, outside vendors are also engaged, including architects, designers, mobile operators (if necessary) and real estate agents, among others. With the first two phases providing a guide, the utility can now open discussions with these vendors and present a clear picture of what it wants. The technical requirements for each exhibit will be cataloged and recorded to ensure that any design will take all requirements into account. Finally, the budget and work plan are written and finalized.

CONCLUSION

With the planning framework completed, the team can now build the center. The framework serves as the blueprint for the center, and all relevant benchmarks must be transparent and open for everyone to see. Disagreements during the buildout phase can be referred back to the framework, and issues that don’t fit the framework are discarded. In this way, the utility can ensure that the technology center will meet its goals and serve as a valuable tool in the process of transformation.

Thank you to Ian Simpson, IBM Global Business Services, for his contributions to this paper.

ENDNOTE

1. Critical peak pricing refers to the model whereby utilities use peak pricing only on days when demand for electricity is at its peak, such as extremely hot days in the summer.

Wind Energy: Balancing the Demand

In recent years, exponential demand for new U.S. wind energy-generating facilities has nearly doubled America’s installed wind generation. By the end of 2007, our nation’s total wind capacity stood at more than 16,000 megawatts (MW) – enough to power more than 4.5 million average American homes each year. And in 2007 alone, America’s new wind capacity grew 45 percent over the previous year – a record 5,244 MW of new projects and more new generating capacity than any other single electricity resource contributed in the same year. At the same time, wind-related employment nearly doubled in the United States during 2007, totaling 20,000 jobs. At more than $9 billion in cumulative investment, wind also pumped new life into regional economies hard hit by the recent economic downturn. [1]

The rapid development of wind installations in the United States comes in response to record-breaking demand driven by a confluence of factors: overwhelming consumer demand for clean, renewable energy; skyrocketing oil prices; power costs that compete with natural gas-fired power plants; and state legislatures that are competing to lure new jobs and wind power developments to their states. Despite these favorable conditions, the wind energy industry has been unable to meet America’s true demand for new wind energy-generating facilities. The barriers include the limited availability of key materials, constraints on the capacity to manufacture large components and a shortage of skilled factory workers.

With the proper policies and related investments in infrastructure and workforce development, the United States stands to become a powerhouse exporter of wind power equipment, a wind technology innovator and a wind-related job creation engine. Escalating demand for wind energy is spurred by wind’s competitive cost against rising fossil fuel prices and mounting concerns over the environment, climate change and energy security.

Meanwhile, market trends and projections point to strong, continued demand for wind well into the future. Over the past decade, a similar surge in wind energy demand has taken place in the European Union (E.U.) countries. Wind power capacity there currently totals more than 50,000 MW, with projections that wind could provide at least 15 percent of the E.U.’s electricity by 2020 – amounting to an installed wind capacity of 180,000 MW and an estimated workforce of more than 200,000 people in wind power manufacturing, installation and maintenance jobs.

How is it, then, that European countries were able to secure the necessary parts and people while the United States fell short in its efforts on these fronts? After all, America has a bigger land mass and a larger, higher-quality wind resource than the E.U. countries. Indeed, the United States is already home to the world’s largest wind farms, including the 735-MW Horse Hollow Wind Energy Center in Texas, which generates power for about 230,000 average homes each year. What’s more, this country also has an extensive manufacturing base, a skilled labor pool and a pressing need to address energy and climate challenges.

So what’s missing? In short, robust national policy support – a prerequisite for strong, long-term investment in the sector. Such support would enable the industry to secure long lead-time materials and sufficient ramp-up to train and employ workers to continue wind power’s surging growth. Thus, the United States must rise to the occasion and assemble several key, interrelated puzzle pieces – policy, parts and people – if it’s to tap the full potential of wind energy.

POLICY: LONG-TERM SUPPORT AND INVESTMENT

In the United States, the federal government has played a key role in funding research and development, commercialization and large-scale deployment of most of the energy sources we rely on today. The oil and natural gas industry has enjoyed permanent subsidies and tax credits that date back to 1916 when Congress created the first tax breaks for oil and gas production. The coal industry began receiving similar support in 1932 with the passage of the first depletion allowances that enabled mining companies to deduct the value of coal removed from a mine from their taxable revenue.

Still in effect today, these incentives were designed to spur exploration and extraction of oil, gas and coal, and have since evolved to include such diverse mechanisms as royalty relief for resources developed on public lands; accelerated depreciation for investments in projects like pipelines, drilling rigs and refineries; and ongoing support for technology R&D and commercialization, such as the Department of Energy’s now defunct FutureGen program for coal research, its Deep Trek program for natural gas development and the VortexFlow SX tool for low-producing oil and gas wells.

For example, the 2005 energy bill passed by Congress provided more than $2 billion in tax relief for the oil and gas industry to encourage investment in exploration and distribution infrastructure. [2] The same bill also provided an expansion of existing support for coal, which in 2003 had a 10-year value of more than $3 billion. Similarly, the nuclear industry receives extensive support for R&D – the 2008 federal budget calls for more than $500 million in support for nuclear research – as well as federal indemnity that helps lower its insurance premiums. [3]

Over the past 15 years, the wind power industry has also enjoyed federal support, with a small amount of funding for R&D (the federal FY 2006 budget allotted $38 million for wind research) and the bulk of federal support taking the form of the Production Tax Credit (PTC) for wind power generation. The PTC has helped make wind energy more cost-competitive with other federally subsidized energy sources; just as importantly, its relatively routine renewal by Congress has created conditions under which market participants have grown accustomed to its effect on wind power finance.

However, in contrast to its consistent policies for coal, natural gas and nuclear power, Congress has never granted long-term approval to the wind power PTC. For more than a decade, in fact, Congress has failed to extend the PTC for longer than two years. And in three different years, the credit was allowed to expire with substantial negative consequences for the industry. Each year that the PTC has expired, major suppliers have had to, in the words of one senior wind power executive, “shut down their factories, lay off their people and go home.”

In 2000, 2002 and 2004, the expiration of the PTC sent wind development plummeting, with an almost complete collapse of the industry in 2000. If the PTC is allowed to expire at the end of 2008, the American Wind Energy Association (AWEA) estimates that as many as 75,000 domestic jobs could be lost as the industry slows production of turbines and power consumers reduce demand for new wind power projects.

The last three years have seen tenuous progress, with Congress extending the PTC for one and then two years; however, the wind industry is understandably concerned about these short-term extensions. Of significant importance is the corresponding effect a long-term or permanent extension of the PTC would have on the U.S. manufacturing sector and related investment activity. For starters, it would put the industry on an even footing with its competitors in the fossil fuels and nuclear industries. More importantly, it would send a clear signal to the U.S. manufacturing community that wind power is a solid, long-term investment.

PARTS: UNLEASHING THE NEXT MANUFACTURING BOOM

To fully grasp the trickle-down effects of an uncertain PTC on the wind power and related manufacturing industries, one must understand the industrial scale of a typical wind power development. Today’s wind turbines represent the largest rotating machinery in the world: a modern-day, 1.5-megawatt machine towers more than 300 feet above the ground with blades that out-span the wings of a 747 jetliner, and a typical utility-scale wind farm will include anywhere from 30 to 200 of these machines, planted in rows or staggered lines across the landscape.

The sheer size and scope of a utility-scale wind farm demands a sophisticated and established network of heavy equipment and parts manufacturers that can fulfill orders in a timely fashion. Representing a familiar process for anyone who’s worked in a steel mill, forge, gear works or similar industrial facility, the manufacture of each turbine requires massive, rolled steel tubes for the tower; a variety of bearings and related components for lubricity in the drive shaft and hub; cast steel for housings and superstructure; steel forgings for shafts and gears; gearboxes for torque transmission; molded fiberglass, carbon fiber or hybrid blades; and electronic components for controls, monitoring and other functions.

U.S. manufacturers have extensive experience making all of these components for other end-use applications, and many have even succeeded in becoming suppliers to the wind industry. For example, Ameron International – a Pasadena, Calif.-based maker of industrial steel pipes, poles and related coatings – converted an aging heavy-steel fabrication plant in Fontana, Calif., to make wind towers. At 80 meters tall, 4.8 meters in diameter and weighing in at 200 tons, a wind tower requires large production facilities that have high up-front capital costs. By converting an existing facility, Ameron was able to capture a key and rapidly growing segment of the U.S. wind market in high-wind Western states while maintaining its position in other markets for its steel products.

Other manufacturers have also seen the opportunity that wind development presents and have taken similar steps. For example, Beaird Co. Ltd, a Shreveport, La.-based metal fabrication and machined parts manufacturer, supplies towers to the Midwest, Texas and Florida wind markets, as does DMI Industries from facilities in Fargo, N.D., and Tulsa, Okla.

But the successful conversion of existing manufacturing facilities to make parts for the wind industry belies an underlying challenge: investment in new manufacturing capacity to serve the wind industry is hindered by the lack of a clear policy framework. Even at wind’s current growth rates and with the resulting pent-up domestic demand for parts, the U.S. manufacturing sector is understandably reluctant to invest in new production capacity.

The cause of this reluctance is depicted graphically in Figure 1. With the stop-and-go nature of the PTC regarding U.S. wind development, and the consistent demand for their products in other end-use sectors, American manufacturers have strong disincentives to invest in new capital projects targeting the wind industry. It can take two to six years to build a new factory and 15 or more years to recapture the investment. The one- to two-year investment cycle of the U.S. wind industry is therefore only attractive to players who are comfortable with the risk and can manage wind as a marginal customer rather than an anchor tenant. This means that over the long haul, the United States could be legislating itself out of the “renewables” space, which arguably has a potential of several trillion dollars of global infrastructure.

The result in the marketplace: the United States ends up importing many of the large manufactured parts that go into a modern wind turbine – translating to a missed opportunity for domestic manufacturers that could be claiming a larger chunk of the underdeveloped U.S. wind market. As the largest consumer of electricity on earth, the United States also represents the biggest untapped market for wind power. At the end of 2007, with multiple successive years of 30 to 40 percent growth, wind power claimed just 1 percent of the U.S. electricity market. The raw potential for wind power in the United States is three times our total domestic consumption, according to the U.S. Energy Information Administration; if supply chain issues weren’t a problem, wind power could feasibly grow to supply as much as 20 to 30 percent of our $330 billion annual domestic electricity market. At 20 percent of domestic energy supply, the United States would need 300,000 MW of installed wind power capacity – an amount that would take 20 to 30 years of sustained manufacturing and development to achieve. But that would require growth well above our current pace of 4,000 to 5,000 MW annually – growth that simply isn’t possible given current supply constraints.
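A rough back-of-the-envelope check, using only the figures cited above, shows why the current pace falls short. This is a sketch for illustration; the numbers are those quoted in the text, not new estimates.

    # Back-of-the-envelope: reaching 300,000 MW of installed wind capacity.
    target_mw = 300_000        # capacity associated with a 20 percent share (from the text)
    current_pace_mw = 5_000    # upper end of today's annual additions (from the text)

    print(f"Years at the current pace: {target_mw / current_pace_mw:.0f}")        # about 60 years
    print(f"Annual additions to finish in 25 years: {target_mw / 25:,.0f} MW")    # about 12,000 MW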

Of course, that’s just the U.S. market. Global wind development is set to more than triple by 2015, with cumulative installed capacity expected to rise from approximately 91 gigawatts (GW) by the end of 2007 to more than 290 GW by the end of 2015, according to forecasts by Emerging Energy Research (EER). Annual MW added for global wind power is expected to increase more than 50 percent, from approximately 17.5 GW in 2007 to more than 30 GW in 2015, according to EER’s forecasts. [4]

By offering the wind power industry the same long-term tax benefits enjoyed by other energy sources, Congress could trigger a wave of capital investment in new manufacturing capacity and turn the United States from a net importer of wind power equipment to a net exporter. But extending the PTC is not the final step: as much as any other component, a robust wind manufacturing sector needs skilled and dedicated people.

PEOPLE: RECLAIMING OUR MANUFACTURING ROOTS

In 2003, the National Association of Manufacturers released a study outlining many of the challenges facing our domestic manufacturing base. “Keeping America Competitive – How a Talent Shortage Threatens U.S. Manufacturing” highlights the loss of skilled manufacturing workers to foreign competitors, the problem of an aging workforce and a shift to a more urban, high-tech economy and culture.

In particular, the study notes a number of “image” problems for the manufacturing industry. To wit: Among a geographically, ethnically and socio-economically diverse set of respondents – ranging from students, parents and teachers to policy analysts, public officials, union leaders, and manufacturing employees and executives – the sector’s image was found to be heavily loaded with negative connotations (and universally tied to the old “assembly line” stereotype) and perceived to be in a state of decline.

When asked to describe the images associated with a career in manufacturing, student respondents offered phrases such as “serving a life sentence,” being “on a chain gang” or a “slave to the line,” and even being a “robot.” Even more telling, most adult respondents said that people “just have no idea” of manufacturing’s contribution to the American economy.

The effect of this “sector fatigue” can be seen across the Rust Belt in the aging factories, retiring workforce and depressed communities being heavily impacted by America’s turn away from manufacturing. Wind power may be uniquely positioned to help reverse this trend. A growing number of America’s young people are concerned about environmental issues, such as pollution and global warming, and want to play a role in solving these problems. With the lure of good-paying jobs in an industry committed to environmental quality and poised for tremendous growth, wind power may provide an answer to manufacturers looking to lure and retain top talent.

We’ve already seen that you don’t need a large wind power resource in your state to enjoy the economic benefits of wind’s surging growth: whether it’s rolled steel from Louisiana and Oklahoma, gear boxes and cables from Wisconsin and New Hampshire, electronic components from Massachusetts and Vermont, or substations and blades from Ohio and Florida, the wind industry’s need for manufactured parts – and for the skilled labor that makes them – is massive, distributed and growing by the day.

UNLEASHING THE POWER OF EVOLUTION

The wind power industry offers a unique opportunity for revitalizing America’s manufacturing sector, creating vibrant job growth in currently depressed regions and tapping new export markets for American-made parts. For utilities and energy consumers, wind power provides a hedge against volatile energy costs and harvests one of our most abundant natural resources for energy security.

The time for wind power is now. As mankind has evolved, so too have our primary sources of energy: from the burning of wood and animal dung to whale oil and coal; to petroleum, natural gas and nuclear fuels; and (now) to wind turbines. The shift to wind power represents a natural evolution and progression that will provide both the United States and the world with critical economic, environmental and technological solutions. As energy technologies continue to evolve and mature, wind power will soon be joined by solar power, ocean current power and even hydrogen as cost-competitive solutions to our pressing energy challenges.

ENDNOTES

1. “American Wind Energy Association 2007 Market Report” (January 2008). www.awea.org/Market_Report_Jan08.pdf

2. Energy Policy Act of 2005, Section 1323-1329. www.citizen.org/documents/energyconferencebill0705.pdf

3. Aileen Roder, “An Overview of Senate Energy Bill Subsidies to the Fossil Fuel Industry” (2003), Taxpayers for Common Sense website. www.taxpayer.net/greenscissors/LearnMore/senatefossilfuelsubsidies.htm

4. “Report: Global Wind Power Base Expected to Triple by 2015” (November 2007), North American Windpower. www.nawindpower.com/naw/e107_plugins/content/content_lt.php?content.1478

Enhancing Energy Efficiency and Security for Sustainable Development

The United States Energy Association (USEA) is a private, nongovernmental organization that functions as the U.S. member committee of the World Energy Council (WEC), the foremost international organization focused on the production and utilization of energy. With members in more than 100 countries, the mission of the WEC, and correspondingly the USEA, has been to promote the sustainable supply and use of energy for the greatest benefit of all people.

The World Energy Council’s flagship is the WEC Congress, which meets every three years. The Congress helps establish how the global energy community looks at the world as well as how we impact that world. When the United States had the privilege of hosting the global energy community 10 years ago in Houston, it promoted the following theme: “Energy and Technology: Sustaining Global Development into the Next Millennium.” The most recent Congress, which took place in Italy in November of last year, centered on “The Energy Future in an Interdependent World.” One can easily see how the WEC’s combined objectives of energy efficiency and energy security – particularly in the context of collaborative action to mitigate climate change – have become critical global issues.

KEY CONCERNS

Efficiency, security and climate are being emphasized in WEC scenarios that project key global energy concerns to the year 2050. The critical factors that will drive energy issues into the future will include the following:

  • Technology;
  • Markets;
  • Sustainability; and
  • Interdependence.

It’s clear that we need to advance research into and development of energy sources; however, it’s even more urgent that we support the demonstration and deployment of advanced clean energy technologies. Currently, policymakers are paying considerable attention to consumer use of energy in buildings and transportation, and they are evaluating alternative technologies to meet these consumer demands. Equally important but often overlooked are the advances our industry has made, and hopefully will continue to make, in energy efficiency through technological improvements in production.

Research from the Electric Power Research Institute indicates that coal-fired electric power plants that achieve a 2 percent gain in efficiency can yield a carbon dioxide (CO2) reduction of 5 percent. Hence, if we can move the rating of the global coal-fired power fleet from about 30 percent efficiency to 40 percent, we can realize a CO2 reduction of 25 percent. And this is without carbon capture and storage.
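The arithmetic behind that claim is straightforward: for the same electrical output, the fuel burned – and thus the CO2 emitted – scales inversely with plant efficiency. A minimal sketch using the efficiencies quoted above:

    # CO2 reduction from raising coal-plant efficiency, holding electrical output constant.
    # Fuel burned (and CO2 emitted) per kilowatt-hour is proportional to 1 / efficiency.
    def co2_reduction(eff_old, eff_new):
        return 1 - eff_old / eff_new

    print(f"{co2_reduction(0.30, 0.40):.0%} less CO2 per kilowatt-hour")  # 25%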

It’s also critically important for energy technology deployment to address the nontechnical barriers to advancing clean energy technologies. Barriers to energy efficiency and energy services trade need to be discussed by the World Trade Organization, since robust trade is essential to ensuring that energy-efficiency technologies cross borders freely. Trade barriers such as tariffs, taxes, customs and import fees need to be eliminated. As World Energy Council Secretary General Gerald Doucet recently pointed out in the International Herald Tribune, “A recent U.S. and EU proposal calling for the elimination of tariffs on a list of 43 environmentally friendly products shows how support is building for a trade-based approach to climate mitigation.”

Perhaps most importantly, the global community must address the issue of the cost of advanced, clean energy technology. Trade barriers, capacity building, tariff reform and other issues can be overcome. However, if we refuse to recognize that advanced clean energy technology will cost more and make energy prices rise for the end-user, we’re refusing to address the real issues – namely, who will pay the incremental cost of advanced technology, and will it be the economically deprived end-user in a developing country?

This is not to say that the non-financial barriers to sustainable energy development are unimportant. Collectively, we still need increased focus on enforcement of contracts, protection of intellectual property, rule of law, protection of assets from seizure and the range of requirements needed to provide incentives for capital, especially foreign investment.

However, markets can only do so much; markets are imperfect, and market failures occur. Coordinated global cooperation – among governments and between governments and the private sector – is critical, particularly to address efficiency, security and climate concerns.

SUSTAINABLE REALITIES

Sustainability remains an elusive goal for many, because it’s not particularly clear how to go about both growing economies and protecting the planet for future generations. What is clear is that climate change must be addressed in an approach that is practical, economic and achievable. For our industry, achievable policy includes political realities. All industries are affected by domestic politics, but in most countries, the energy industry is dramatically influenced by local political concerns.

The move toward sustainability will also have an impact on the 1.5 billion people without access to commercial energy and the 1.5 billion with inadequate access. Hopefully, no one believes that sustainability means denying the benefits of modern society to those who are unserved or under-served today. We must find ways to work toward ending economic and energy poverty for hundreds of millions of people around the globe. This calls for new approaches that continue to allow economic development while addressing both local environmental issues and global issues such as climate change.

AN INTERDEPENDENT WORLD

The concept of energy interdependence helps us recognize that very few nations are today – or ever will be – truly “energy independent.” Much of the rhetoric regarding the energy independence of the United States and other nations is, in fact, vague and not based on reality. Thus, it’s critical to expose this fantasy for what it is: wishful thinking. Interdependence is the ally, not the enemy, of energy security.

As Rex Tillerson, chairman and CEO of ExxonMobil, pointed out in his keynote address to the World Energy Congress in Rome in November 2007, the world needs to avoid “the danger of resource nationalism.” He also stressed the need to “ensure that the global energy markets and international partnerships do not fall apart.” In the United States in 2008, domestic consumption will continue to exceed domestic production. We will import more petroleum (about 60 percent of our petroleum is now imported) and increasingly more natural gas.

WORKING TOWARD A SUSTAINABLE FUTURE

Construction of critical energy supply infrastructure presents a huge challenge. As we begin 2008 in the United States, it’s critical that we recognize that all energy supply options – coal, nuclear, natural gas, petroleum and renewable – have severe constraints. This recognition must lead us to declare energy efficiency as Priority No. 1 for energy and economic security, and climate mitigation.

While we have done much in the United States to pursue efficiency, we still need to do more, including:

  • Increasing the utilization of combined heat and power applications;
  • Further improving efficiency standards;
  • Improving land use and transportation planning;
  • Providing incentives for efficiency investments; and
  • Decoupling regulated utility returns from sales.

On an international level, we must continue to:

  • Pursue energy efficiency in both supply and demand (increasing both end-use efficiency and production efficiency);
  • Decarbonize electricity (moving toward emissions-free power by mid-century);
  • Contain growth in transportation emissions and develop carbon-free alternatives; and
  • Support major collaborative efforts on technology development and deployment, such as the Asia-Pacific Partnership on Clean Development and Climate, the International Partnership for the Hydrogen Economy, the Carbon Sequestration Leadership Forum and the Major Economies Process for Energy Security and Climate Change.

The trilateral issues of energy efficiency, energy security and climate change are reflected in all of our international partnerships. Nevertheless, much more international collaboration will be needed to speed the deployment of energy efficiency technologies.

As we think about energy efficiency, security and climate, it’s critical for us to remember the following:

  • No single source, technology, policy or strategy can meet the challenges we face. All energy options should be left on the table. No “one size fits all” solution exists.
  • No single approach will work everywhere. Different measures will be useful, and each economy or nation will consider the options that work for them. A range of measures is available, and actions must be selected that are appropriate to each circumstance.

The key for the global community will be to encourage each sovereign economy to put in place policies that support long-term investment in clean energy technology. International cooperation among governments, and between governments and the private sector, is essential. The focal points of international cooperation should stress energy efficiency (in both supply and demand), decarbonizing electric power (while recognizing that the world will continue to rely on fossil fuels, particularly coal for power generation) and reducing the growth – and eventually the level – of emissions from transportation.

Finally, but perhaps most importantly, we must continue to push for a coordinated, international effort in advanced technology demonstration and deployment. The international partnerships cited earlier are useful tools, but we can and must do more.

Anthropogenic Global Warming: Some First-Order Questions

Today, there’s growing sentiment among members of the U.S. Congress that it must do something to confront the possibility of catastrophic CO2-driven climate change. With this in mind, American corporations have begun taking steps to minimize the adverse financial impact of any actions Congress and the U.S. Environmental Protection Agency might take to address anthropogenic global warming (AGW) – this despite the fact that there continues to be controversy within the scientific community over the degree to which anthropogenic carbon emissions are actually contributing to global warming.

There are several first-order questions about climate change that still lack unique, definitive answers. These include the following:

  • What is the ideal global average temperature?
  • What is the ideal atmospheric CO2 concentration?
  • By what percentage must AGW emissions be reduced to stabilize atmospheric CO2 concentrations?
  • Over what time period must that reduction occur?
  • Who will convince all global emitters to do what must be done?

While these questions may sound trivial, their answers would be crucial to defining any serious effort to halt or reverse global warming caused by anthropogenic carbon emissions, as well as to halt or reverse the accumulation of CO2 in the atmosphere.

WHAT IS THE IDEAL GLOBAL AVERAGE TEMPERATURE?

As you will see in Figure 1, over the past 4,500 years, the global average temperature has hovered around 57 degrees Fahrenheit, plus or minus 2.5 degrees. Since humans have thrived throughout this period, it seems reasonable to assume that the ideal global temperature falls either within or quite close to this range.

Environmentalists, however, have expressed concern that the current rise of approximately 1 degree over the long-term average temperature may be a result of a different mechanism than previous increases. And this in turn, they believe, could drive the average global temperature beyond the recent range … with potentially catastrophic results. There have even been discussions of a “tipping point” beyond which a return to the ideal temperature (or temperature range) might not be possible.

Since the science of global warming is now considered “settled,” it should be possible to definitively identify the ideal global average temperature (or temperature range).

WHAT IS THE IDEAL ATMOSPHERIC CO2 CONCENTRATION?

In 1900, the atmospheric concentration of CO2 was approximately 270 parts per million by volume (ppmv). Since then, the atmospheric concentration of CO2 has increased to approximately 380 ppmv – an increase attributed entirely to the emissions of anthropogenic CO2. In addition, analyses of anthropogenic CO2 emissions since 1900 suggest that not all of the anthropogenic CO2 released into the atmosphere has remained there. Instead, a significant portion has been absorbed (largely by the oceans), thus moderating the increase in atmospheric concentrations. Much of the discussion about a tipping point is built on the fear that progressive increases in temperature will cause the oceans to absorb and hold progressively less incremental CO2 – the key concern being that warmer oceans could stop absorbing incremental CO2 altogether and even release some of the CO2 they currently hold. This in turn would cause the atmospheric CO2 concentration and the average global temperature to increase rapidly and dramatically.

Since both atmospheric CO2 concentration and global average temperature have increased progressively (though not uniformly) over the period, it seems reasonable to assume that the ideal atmospheric CO2 concentration is approximately 270 ppmv. This, however, presents a major challenge since to meet that goal, we would not only have to halt the increase in atmospheric CO2 concentrations but also remove approximately 30 percent of the CO2 currently held in the global atmosphere.
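The roughly 30 percent figure follows directly from the concentrations quoted above, as this minimal check shows:

    # Share of today's atmospheric CO2 that would have to be removed
    # to return from roughly 380 ppmv to the assumed ideal of 270 ppmv.
    current_ppmv = 380
    ideal_ppmv = 270
    print(f"{(current_ppmv - ideal_ppmv) / current_ppmv:.0%}")  # about 29 percent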

BY WHAT PERCENTAGE MUST ANTHROPOGENIC CO2 EMISSIONS BE REDUCED TO ACHIEVE THE IDEAL?

Studies suggest that the Earth’s oceans absorb approximately one-third of the annual anthropogenic carbon dioxide emissions. Assuming this is true, global anthropogenic CO2 emissions would have to be reduced by at least 66 percent to avoid further accumulation of anthropogenic CO2 in the atmosphere. However, since the absorption of CO2 in the oceans is a function of the CO2 concentration gradient between the atmosphere and the oceans, it would actually take a larger reduction in CO2 emissions to ensure that atmospheric concentrations did not further increase over time. Reducing anthropogenic CO2 emissions to zero would – all other things being equal – prevent further accumulation of CO2 in the atmosphere. However, as mentioned above, if the ideal atmospheric CO2 concentration is approximately 270 ppmv, not only would anthropogenic CO2 emissions have to be totally eliminated but a substantial effort would be required to reduce current atmospheric CO2 concentrations to approximately 270 ppmv.
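The 66 percent figure comes straight from the one-third absorption assumption: if the oceans take up roughly a third of annual emissions, the other two-thirds accumulate in the atmosphere, so emissions would have to fall by at least that share just to hold concentrations flat. A minimal sketch:

    # If the oceans absorb about one-third of annual anthropogenic CO2 emissions,
    # the remaining two-thirds accumulate in the atmosphere.
    ocean_uptake = 1 / 3
    accumulating_share = 1 - ocean_uptake
    print(f"Minimum emissions cut to halt accumulation: {accumulating_share:.0%}")  # about 67 percent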

Although technology exists to remove CO2 from the atmosphere, its application is impractical as long as combustion processes continue to produce additional CO2. Once anthropogenic CO2 emissions were reduced to zero, however, it would be possible to begin reducing atmospheric CO2 concentrations via extraction techniques. Sir Richard Branson has offered a $25 million prize for the development and demonstration of a more economically practical method for removing excess CO2. However, it appears highly unlikely that this would be feasible as long as additional atmospheric carbon dioxide continues to be generated by terrestrial combustion.

OVER WHAT TIME PERIOD MUST THIS REDUCTION OCCUR?

The increase of nearly 110 ppmv in ambient CO2 concentrations over the past century suggests that anthropogenic CO2 emissions would have to be reduced to zero over a period of about 50 years to prevent atmospheric CO2 concentrations from reaching levels of approximately 450 ppmv. Attempting to reduce anthropogenic CO2 emissions to zero over any substantially shorter period would likely result in massive economic dead loss, since many facilities currently emitting CO2 would have to be removed from service before the end of their economic lives. Shortening the time period would also increase the pain and dislocations associated with the massive economic changes involved in the transition.
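One way to see the 50-year, 450-ppmv relationship is to start from the recent accumulation rate and assume a straight-line ramp of emissions down to zero. The sketch below is illustrative only: the starting concentration of roughly 380 ppmv comes from the text, while the 2-ppmv-per-year accumulation rate is an assumed recent value, not a figure given above.

    # Illustrative only (assumed accumulation rate): a linear ramp of emissions to zero
    # over 50 years adds roughly half of the business-as-usual increase.
    current_ppmv = 380
    annual_rise_ppmv = 2.0      # assumed recent accumulation rate
    years = 50
    added_ppmv = annual_rise_ppmv * years / 2   # area under a linear ramp to zero
    print(f"Approximate peak concentration: {current_ppmv + added_ppmv:.0f} ppmv")  # about 430, under 450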

On the other end of the spectrum, extending the period over which CO2 emissions would be eliminated beyond 50 years would result in a higher maximum atmospheric concentration – making the challenge of removing the excess CO2 even more difficult (an effect that could be offset to some degree by “frontloading” reductions to achieve maximum early impact).

WHO WILL CONVINCE GLOBAL EMITTERS TO DO WHAT’S NECESSARY?

This is by far the most difficult question because unlike the others (which are technical in nature), this one is political. Hinging as it does on the willingness of national leaders to make substantial investments in non-emitting technologies, it raises a number of issues. In some countries, for example, implementing meaningful CO2 emissions controls could mean delaying economic development. The impact of such a transition – particularly in developing economies – would depend on the commercial readiness and relative economics of the non-emitting technologies.

In the case of electric power generation, well-established commercial technology is available to replace existing facilities and meet the growing power needs of developing countries. These technologies include large-scale hydroelectric dams, geothermal and nuclear generation, and solar, wind and small hydro facilities. Keep in mind, however, that the contributions of solar and wind generation are limited by the intermittency of these power sources and the lack of economical, efficient, large-scale electric storage technology.

In the case of transportation, the prospects for carbon control are far less clear. Electric vehicles have an extremely limited range. And while current technology enables synthetic fuels such as ethanol and bio-diesel to be produced in volume, such fuels are still far more expensive than the fossil fuels they would replace. Hydrogen also represents a possible “energy carrier” for transportation; however, its conventional production requires huge amounts of electricity, and renewable technologies for hydrogen production are not yet commercially available. In addition, a transition to hydrogen as a vehicle fuel would require the development of an entirely new production, transmission and distribution infrastructure.

SUMMARY AND CONCLUSIONS

The world community’s efforts to stem the increase of anthropogenic CO2 in the atmosphere – and thus bring an end to anthropogenic global warming – have to date been ill conceived and poorly executed. As a global issue, climate change requires a global solution; anything less is doomed to failure. Yet the Kyoto Accords do not represent a global solution, nor would any effort that the U.S. Congress legislated or the Environmental Protection Agency imposed.

A solution to anthropogenic global warming requires definitive answers to the first four questions posed above – answers that would serve as the goals of a global effort to control anthropogenic climate change. Once goals have been established and the technologies required to replace fossil fuel consumption have been evaluated, we would have the basis on which to develop a plan for reducing global CO2 emissions over the required time frame. If such a plan is to succeed, it must be adopted by every nation on the globe, and each nation’s compliance must be documented and verified throughout the reduction time frame.

Based on conservative estimates of the cost of building a nuclear generation infrastructure to replace existing U.S. fossil fuel generation and meet the needs of an expanding U.S. population, I’ve previously estimated that this country would need to invest approximately $10 trillion to achieve a 95 percent reduction in CO2 emissions. That investment requirement, however, could grow to as much as $40 trillion under a business-as-usual legal and regulatory scenario. The global investment required to eliminate anthropogenic carbon emissions would thus be on the order of $40 trillion to $100 trillion by the middle of this century. Keep in mind, however, that these estimates are based on currently available technology. If more advanced technologies were to become available in the power generation, transportation equipment or transportation fuels sectors, those investment requirements could be reduced substantially.

The massive nature and scope of the changes required to achieve zero anthropogenic CO2 emissions globally, combined with the large-scale investments required to achieve such a goal, suggest the need for a massive research, development, demonstration and deployment (RDD&D) effort to develop technologies for executing the plan and achieving its goals with the minimum possible investment and economic disruption. Much of the RDD&D currently being pursued in this regard, however, is not focused on the path to a zero-emissions future – which means that if a zero-emissions future is our true goal, this RDD&D is a waste of time, money and effort because even if successful, it would be inadequate to accomplish the goal. The same can be said of much of the investment currently being allocated or planned to marginally reduce emissions below current levels.

The five questions raised above may seem silly in their simplicity; however, they are fundamental. The absence of unique, definitive answers to those questions at this stage of the discussion of global climate change is both unbelievable and unacceptable. Legislating or regulating on a national basis regarding global climate change, in the absence of unique, definitive answers to these questions, is irresponsible at best and grossly negligent at worst.

Leveraging the Data Deluge: Integrated Intelligent Utility Network

If you define a machine as a series of interconnected parts serving a unified purpose, the electric power grid is arguably the world’s largest machine. The next-generation version of the electric power grid – called the intelligent utility network (IUN), the smart grid or the intelligent grid, depending on your nationality or information source – provides utilities with enhanced transparency into grid operations.

Considering the geographic and logical scale of the electric grid from any one utility’s point of view, a tremendous amount of data will be generated by the additional “sensing” of the workings of the grid provided by the IUN. This output is often described as a “data flood,” and the implication that businesses could drown in it is apropos. For that reason, utility business managers and engineers need analytical tools to keep their heads above water and obtain insight from all this data. Paraphrasing the psychologist Abraham Maslow, the “hierarchy of needs” for applying analytics to make sense of this data flood could be represented as follows (Figure 1).

  • Insight represents decisions made based on analytics calculated using new sensor data integrated with existing sensor or quasi-static data.
  • Knowledge means understanding what the data means in the context of other information.
  • Information means understanding precisely what the data measures.
  • Data represents the essential reading of a parameter – often a physical parameter.

In order to reap the benefits of accessing the higher levels of this hierarchy, utilities must apply the correct analytics to the relevant data. One essential element is integrating the new IUN data with other data over the various time dimensions. Indeed, it is analytics that allow utilities to truly benefit from the enhanced capabilities of the IUN compared to the traditional electric power grid. Analytics can consist solely of calculations (such as measuring reactive power), or they can be rule-based (such as rating a transformer as “stressed” if it runs above 120 percent of its nameplate rating over a two-hour period).
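A rule of that kind is easy to express in code. The sketch below is a minimal, hypothetical version of the “stressed transformer” rule described above; the data layout, field names and sample readings are illustrative assumptions.

    # Hypothetical rule-based analytic: flag a transformer as "stressed" if its load
    # stays above 120 percent of nameplate rating for two hours or more.
    # Each reading is a (timestamp_in_hours, load_in_mva) pair.
    def is_stressed(readings, nameplate_mva, threshold=1.2, window_hours=2.0):
        over_since = None
        for t, load in readings:
            if load > threshold * nameplate_mva:
                if over_since is None:
                    over_since = t
                if t - over_since >= window_hours:
                    return True
            else:
                over_since = None
        return False

    readings = [(0.0, 58), (0.5, 62), (1.0, 63), (1.5, 64), (2.0, 65), (2.5, 66)]
    print(is_stressed(readings, nameplate_mva=50))  # True: above 60 MVA for two hours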

The data to be analyzed comes from multiple sources. Utilities have for years had supervisory control and data acquisition (SCADA) systems in place that employ technologies to transmit voltage, current, watts, volt ampere reactives (VARs) and phase angle via leased telephone lines at 9,600 baud, using the distributed network protocol (DNP3). Utilities still need to integrate this basic information from these systems.

In addition, modern electrical power equipment often comes with embedded microprocessors capable of generating useful non-operational information. This can include switch closing time, transformer oil chemistry and arc durations. These pieces of equipment – generically called intelligent electrical devices (IEDs) – often have local high-speed sequence-of-events recorders that can be programmed to deliver even more data for post-event analysis.

An increasing number of utilities are beginning to see the business cases for implementing an advanced metering infrastructure (AMI). A large-scale deployment of such meters would also function as a fine-grained edge sensor system for the distribution network, providing not only consumption but voltage, power quality and load phase angle information. In addition, an AMI can be a strategic platform for initiating a program of demand-response load control. Indeed, some innovative utilities are considering two-way AMI meters to include a wireless connection such as Zigbee to the consumer’s home automation network (HAN), providing even finer detail to load usage and potential controllability.

Companies must find ways to analyze all this data, both from explicit sources such as IEDs and implicit sources such as AMI or geographical information systems (GIS). A crucial aspect of IUN analysis is the ability to integrate conventional database data with time-synchronized data, since an analytic computed in isolation may be less useful than no analytic at all.

CATEGORIES AND RELATIONSHIPS

There are many different categories of analytics that address the specific needs of the electric power utility in dealing with the data deluge presented by the IUN. Some depend on the state regulatory environment, which not only imposes operational constraints on utilities but also determines the scope and effect of what analytics information exchange is required. For example, a generation-to-distribution utility – what some fossil plant owners call “fire to wire” – may have system-wide analytics that link load dispatch to generation economics, transmission line realities and distribution customer load profiles. Other utilities operate power lines only, and may not have their own generation capabilities or interact with consumers at all. Utilities like these may choose to focus initially on distribution analytics such as outage prediction and fault location.

Even the term analytics can have different meanings for different people. To the power system engineer it involves phase angles, voltage support from capacitor banks and equations that take the form “a + j*b.” To the line-of-business manager, integrated analytics may include customer revenue assurance, lifetime stress analysis of expensive transformers and dashboard analytics driving business process models. Customer service executives could use analytics to derive emergency load control measures based on a definition of fairness that could become quite complex. But perhaps the best general definition of analytics comes from the Six Sigma process mantra of “define, measure, analyze, improve, control.” In the computer-driven IUN, this would involve:

  • Define. This involves sensor selection and location.
  • Measure. SCADA systems enable this process.
  • Analyze. This can be achieved using IUN Analytics.
  • Improve. This involves grid performance optimization, as well as business process enhancements.
  • Control. This is achieved by sending commands back to grid devices via SCADA, and by business process monitoring.

The term optimization can also be interpreted in several ways. Utilities can attempt to optimize key performance indicators (KPIs) such as the system average interruption duration index (SAIDI, which is somewhat consumer-oriented), grid efficiency in terms of megawatts lost to component heating, business processes (such as minimizing outage time to repair) or meeting energy demand with minimum incremental fuel cost.
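For reference, SAIDI itself is a simple ratio: total customer-minutes of interruption over a period divided by the total number of customers served. The outage records in this sketch are invented for illustration.

    # SAIDI = total customer-minutes of interruption / total customers served.
    # Hypothetical outage records: (customers_affected, minutes_out).
    outages = [(1_200, 90), (350, 240), (5_000, 15)]
    customers_served = 100_000

    saidi = sum(customers * minutes for customers, minutes in outages) / customers_served
    print(f"SAIDI: {saidi:.2f} minutes per customer")  # 2.67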

Although optimization issues often cross departmental boundaries, utilities may make compromises for the sake of achieving an overall strategic goal that can seem elusive or even run counter to individual financial incentives. An important part of higher-level optimization – in a business sense rather than a mathematical one – is the need for a utility to document its enterprise functions using true business process modeling tools. These are essential to finding better application integration strategies. That way, the business can monitor the advisories from analytics in the tool itself, and more easily identify business process changes suggested by patterns of online analytics.

Another aspect of IUN analytics involves – using a favorite television news phrase – “connecting the dots.” This means ensuring that a utility actually realizes the impact of a series of events on an end state, even though the individual events may appear unrelated.

For example, take complex event processing (CEP). A “simple” event might involve a credit card company’s software verifying that your credit card balance is under the limit before sending an authorization to the merchant. A “complex” event would take place if a transaction request for a given credit card account was made at a store in Boston, and another request an hour later in Chicago. After taking into account certain realities of time and distance, the software would take an action other than approval – such as instructing the merchant to verify the cardholder’s identity.

Back in the utilities world, consideration of weather forecasts in demand-response action planning, or distribution circuit redundancy in the face of certain existing faults, can be handled by such software. The key in developing these analytics is not so much about establishing valid mathematical relationships as it is about giving a businessperson the capability to create and define rules. These rules must be formulated within an integrated set of systems that support cross-functional information. Ultimately, it is the businessperson who relates the analytics back to business processes.
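In code, such a business-defined rule might look like the following sketch, which combines a weather forecast with feeder state to decide whether to arm a demand-response event. The thresholds and field names are hypothetical; a real rule would come from the businessperson described above.

    # Hypothetical complex-event rule: arm a demand-response event when a heat-wave
    # forecast coincides with reduced feeder redundancy or near-peak load.
    def should_arm_demand_response(forecast_high_f, open_faults_on_feeder, peak_load_pct):
        heat_wave = forecast_high_f >= 95           # assumed threshold, degrees Fahrenheit
        reduced_redundancy = open_faults_on_feeder > 0
        near_peak = peak_load_pct >= 90
        return heat_wave and (reduced_redundancy or near_peak)

    print(should_arm_demand_response(forecast_high_f=98, open_faults_on_feeder=1, peak_load_pct=85))  # True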

AVAILABLE TOOLS

Time can be a critical variable in successfully using analytics. In some cases, utilities require analytics to be responsive to the electric power grid’s need to input, calculate and output in an actionable time frame.

Utilities often have analytics built into functions in their distribution management or energy management systems, as well as individual analytic applications, both commercial and home-grown. And some utilities are still making certain decisions by importing data into a spreadsheet and using a self-developed algorithm. No matter what the source, the architecture of the analytics system should provide a non-real-time “bus,” often a service-oriented architecture (SOA) or Web services interface, but also a more time-dependent data bus that supports common industry tools used for desktop analytics within the power industry.

It’s important that everyone in the utility has internally published standards for interconnecting their analytics to the buses, so all authorized stakeholders can access them. Utilities should also set enterprise policy – otherwise known as SOA governance – for special connectors, manual entry and duplication of data.

The easier it is for utilities to use the IUN data, the less likely it is that their engineering, operations and maintenance staffs will be overwhelmed by the task of actually acquiring the data. Although the term “plug and play” has taken on certain negative connotations – largely due to the fact that few plug-and-play devices actually do that – the principle of easily adding a tool is still both valid and valuable. New instances of IUN can even include Web 2.0 characteristics for the purpose of mash-ups – easily configurable software modules that link, without pain, via Web services.

THE GOAL OF IMPLEMENTING ANALYTICS

Utilities benefit from applying analytics by making the best use of integrated utility enterprise information and data models, and unlocking employee ideas or hypotheses about ways to improve operations. Often, analytics are also useful in helping employees identify suspicious relationships between data. The widely lamented “aging workforce” issue typically involves the loss of senior staff who can visualize relationships that aren’t formally captured, and who were able to make connections that others didn’t see. Higher-level analytics can partly offset the impact of the aging workforce brain drain.

Another type of analytics is commonly called “business intelligence” (BI). But although a number of best-selling general-purpose BI tools are commercially available, utilities need to ensure that the tools have access to the correct, unique, authoritative data. Upon first installing BI software, there’s sometimes a tendency among new users to quickly assemble a highly visual dashboard – without regard to the integrity of the data they’re importing into the tool.

Utilities should also create enterprise data models and data dictionaries to ensure the accuracy of the information being disseminated throughout the organization. After all, utilities frequently use analytics to create reports that summarize data at a high level. Yet some fault detection schemes – such as identifying problems in buried cables – may need original, detailed source data. For that reason utilities must have an enterprise data governance scheme in place.

In newer systems, data dictionaries and models can be provided by a Web service. But even if the dictionary consists of an intermediate lookup table in a relational database, the principles still hold: Every process and calculated variable must have a non-ambiguous name, a cross-reference to other major systems (such as a distribution management system [DMS] or geographic information system [GIS]), a pointer to the data source and the name of the person who owns the data. It is critical for utilities to assign responsibility for data accuracy, validation, source and caveats at the beginning of the analytics engineering process. Finding data faults after they contribute to less-than-correct results from the analytics is of little use. Utilities may find data scrubbing and cross-validation tools from the IT industry to be useful where massive amounts of data are involved.
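A data dictionary entry along the lines described above can be quite small. The sketch below mirrors the requirements named in the text – a non-ambiguous name, cross-references to systems such as a DMS or GIS, a pointer to the data source and a named owner – but every field value shown is a hypothetical example.

    # Hypothetical sketch of a data dictionary entry for one calculated variable.
    from dataclasses import dataclass, field

    @dataclass
    class DictionaryEntry:
        name: str                                              # non-ambiguous, enterprise-wide name
        cross_references: dict = field(default_factory=dict)   # e.g., DMS and GIS identifiers
        source: str = ""                                       # pointer to the authoritative data source
        owner: str = ""                                        # person responsible for accuracy and validation
        caveats: str = ""                                      # known limitations, recorded up front

    entry = DictionaryEntry(
        name="feeder_12_peak_load_mw",
        cross_references={"DMS": "FDR-012", "GIS": "asset-98412"},
        source="scada.historian.feeder12.load",
        owner="distribution.planning@example-utility.com",
        caveats="derived from 15-minute SCADA averages",
    )
    print(entry.name, entry.owner)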

Utilities have traditionally used simulation primarily as a planning tool. However, with the continued application of Moore’s law, the ability to feed a power system simulation with real-time data and solve a state estimation in real time can result in an affordable crystal ball for predicting problems, finding anomalies or performing emergency problem solving.

THE IMPORTANCE OF STANDARDS

The emergence of industry-wide standards is making analytics easier to deploy across utility companies. Standards also help ease the path to integration. After all, most electrons look the same around the world, and the standards arising from the efforts of Kirchhoff, Tesla and Maxwell have been broadly adopted globally. (Contrary views from the quantum mechanics community will not be discussed here!) Indeed, having a documented, self-describing data model is important for any utility hoping to make enterprise-wide use of data for analytics; using an industry-standard data model makes the analytics more easily shareable. In an age of greater grid interconnection, more mergers and acquisitions, and staff shortages, utilities’ ability to reuse and share analytics and create tools on top of standards-based data models has become increasingly important.

Standards are also important when interfacing to existing utility systems. Although the IUN may be new, data on existing grid apparatus and layout may be decades old. By combining the newly added grid observations with the existing static system information to form a complete integration scenario, utilities can leverage analytics much more effectively.

When deploying an IUN, there can be a tendency to use just the newer, sensor-derived data to make decisions, because one knows where it is and how to access it. But using standardized data models makes incorporating existing data less of an issue. There is nothing wrong with creating new data models for older data.

CONCLUSION

To understand the importance of analytics in relation to the IUN, imagine an ice-cream model (pick your favorite flavor). At the lowest level we have data: the ice cream is 30 degrees. At the next level we have information: you know that it is 30 degrees on the surface of the ice cream, and that it will start melting at 32 degrees. At the next level we have knowledge: you’re measuring the temperature of the middle scoop of a three-scoop cone, and therefore when it melts, the entire structure will collapse. At the insight level we bring in other knowledge – such as that the ambient air temperature is 80 degrees, and that the surface temperature of the ice cream has been rising 0.5 degrees per minute since you purchased it. Then the gastronomic analytics activate and take preemptive action, causing you to eat the whole cone in one bite, because the temporary frozen-teeth phenomenon is less of a business risk than having the scoops melt and fault to ground.

Growing (or Shrinking) Trends in Nuclear Power Plant Construction

Around the world, the prospects for nuclear power generation are improving – as evidenced by the number of nuclear plants now under construction that are smaller than those currently in the limelight. Offering advantages in certain situations, these smaller plants can more readily serve smaller grids as well as be used for distributed generation (with power plants located close to the demand centers and the main grid providing back-up). Smaller plants are also easier to finance, particularly in countries that are still in the early days of their nuclear power programs.

In recent years, development and licensing efforts have focused primarily on large, advanced reactors, due to their economies of scale and obvious application to developed countries with substantial grid infrastructure. Meanwhile, the wide scope for smaller nuclear plants has received less attention. However, of the 30 or more countries that are moving toward implementing nuclear power programs, most are likely to be looking initially for units under 1,000 MWe, and some for units of less than half that amount.

EXISTING DESIGNS

With that in mind, let’s take a look at some of the current designs.

There are many plants under 1,000 MWe now in operation, even if their replacements tend to be larger. (In 2007 four new units were connected to the grid – two large ones, one 202-MWe unit and one 655-MWe unit.) In addition, some smaller reactors are either on offer now or likely to be available in the next few years.

Five hundred to 700 MWe. There are several plants in this size range, including Westinghouse AP600 (which has U.S. design certification) and the Canadian Candu-6 (being built in Romania). In addition, China is building two CNP-600 units at Qinshan but does not plan to build any more of them. In Japan, Hitachi-GE has completed the design of a 600-MWe version of its 1,350-MWe ABWR, which has been operating for 10 years.

Two hundred and fifty to 500 MWe. In the 250- to 500-MWe category (electrical rather than thermal output), there are a few designs pending but little on offer immediately.

IRIS. Being developed by an international team led by Westinghouse in the United States, IRIS – or, more formally, International Reactor Innovative and Secure – is an advanced third-generation modular 335-MWe pressurized water reactor (PWR) with integral steam generators and a primary coolant system all within the pressure vessel. U.S. design certification is at pre-application stage with a view to final design approval by 2012 and deployment by 2015 to 2017.

VBER-300 PWR. This 295- to 325-MWe unit from Russia was designed by OKBM based on naval power plants and is now being developed as a land-based unit with the state-owned nuclear holding company Kazatomprom, with a view to exporting it. The first two units will be built in Southwest Kazakhstan under a Russian-Kazakh joint venture.

VK-300. This Russian-built boiling water reactor is being developed for co-generation of both power and district heating or heat for desalination (150 MWe plus 1675 GJ/hr) by the nuclear research and development organization NIKIET. The unit evolved from the VK-50 BWR at Dimitrovgrad but uses standard components from larger reactors wherever possible. In September 2007, it was announced that six of these units would be built at Kola and at Primorskaya in Russia’s far east, to start operating between 2017 and 2020.

NP-300 PWR. Developed in France from submarine power plants and aimed at export markets for power, heat and desalination, this Technicatome (Areva)-designed reactor has passive safety systems and can be built for applications of 100 to 300 MWe.

China is also building a 300-MWe PWR (pressurized water reactor) nuclear power plant in Pakistan at Chasma (alongside another that started up in 2000); however, this is an old design based on French technology and has not been offered more widely. The new unit is expected to come online in 2011.

One hundred to 300 MWe. This category includes both conventional PWR and high-temperature gas-cooled reactors (HTRs); however, none in the second category are being built yet. Argentina’s CAREM nuclear power plant is being developed by CNEA and INVAP as a modular 27-MWe simplified PWR with integral steam generators designed to be used for electricity generation or for water desalination.

FLOATING PLANTS

After many years of promoting the idea, Russia’s state-run atomic energy corporation Rosatom has approved construction of a nuclear power plant on a 21,500-ton barge to supply 70 MWe of power plus 586 GJ/hr of heat to Severodvinsk, in the Archangelsk region of Russia. The contract to build the first unit was let by nuclear power station operator Rosenergoatom to the Sevmash shipyard in May 2006. Expected to cost $337 million (including $30 million already spent in design), the project is 80 percent financed by Rosenergoatom and 20 percent financed by Sevmash. Operation is expected to begin in mid-2010.

Rosatom is planning to construct seven additional floating nuclear power plants, each (like the initial one) with two 35-MWe OKBM KLT-40S nuclear reactors. Five of these will be used by Gazprom – the world’s biggest extractor of natural gas – for offshore oil and gas field development and for operations on Russia’s Kola and Yamal Peninsulas. One of these plants is planned for 2012 commissioning at Pevek on the Chukotka Peninsula, and another is planned for the Kamchatka region, both in the far east of the country. Even farther east, sites being considered include Yakutia and Taimyr. Electricity cost is expected to be much lower than from present alternatives. In 2007 an agreement was signed with the Sakha Republic (Yakutia region) to build a floating plant for its northern parts, using smaller ABV reactors.

OTHER DESIGNS

On a larger scale, South Korea’s SMART is a 100-MWe PWR with integral steam generators and advanced safety features. It is designed to generate electricity and/or supply heat for thermal applications such as seawater desalination. Indonesia’s national nuclear energy agency, Batan, has undertaken a pre-feasibility study for a SMART reactor for power and desalination on Madura Island. However, this awaits the building of a reference plant in Korea.

There are three high-temperature, gas-cooled reactors capable of being used for power generation, but much of the development impetus has been focused on the thermo-chemical production of hydrogen. Fuel for the first two consists of billiard ball-size pebbles that can withstand very high temperatures. These aim for a step-change in safety, economics and proliferation resistance.

China’s 200-MWe HTR-PM is based on a well-tested small prototype, and a two-module plant is due to start construction at Shidaowan in Shandong province in 2009. This reactor will use the conventional steam cycle to generate power. Start-up is scheduled for 2013. After the demonstration plant, a power station with 18 modules is envisaged.

Very similar to China’s plant is South Africa’s Pebble Bed Modular Reactor (PBMR), which is being developed by a consortium led by the utility Eskom. Production units will be 165 MWe. The PBMR will have a direct-cycle gas turbine generator driven by hot helium. The PBMR Demonstration unit is expected to start construction at Koeberg in 2009 and achieve criticality in 2013.

Both of these designs are based on earlier German reactors that have some years of operational experience. A U.S. design, the Modular Helium Reactor (GT-MHR), is being developed in Russia; in its electrical application, each unit would directly drive a gas turbine giving 280 MWe.

These three designs operate at much higher temperatures than ordinary reactors and offer great potential as sources of industrial heat, including for the thermo-chemical production of hydrogen on a large scale. Much of the development thinking going into the PBMR has been geared to synthetic oil production by Sasol (South African Coal and Oil).

MODULAR CONSTRUCTION

The IRIS developers have outlined the economic case for modular construction of their design (about 330 MWe), and it’s an argument that applies similarly to other smaller units. These developers point out that IRIS, with its moderate size and simple design, is ideally suited for modular construction. The economy of scale is replaced here with the economy of serial production of many small and simple components and prefabricated sections. They expect that construction of the first IRIS unit will be completed in three years, with subsequent production taking only two years.

Site layouts have been developed with multiple single units or multiple twin units. In each case, units will be constructed with enough space around them to allow the next unit to be built while the previous one is operating and generating revenue. Even with this separation, the plant footprint can be very compact: a site with three IRIS single modules providing 1,000 MWe is similar in size to, or smaller than, a site housing a single unit of comparable total power.

Eventually, IRIS’ capital and production costs are expected to be comparable to those of larger plants. However, any small unit offers a funding profile and flexibility impossible to achieve with larger plants. As one module is finished and starts producing electricity, it will generate positive cash flow for the construction of the next module. Westinghouse estimates that 1,000 MWe delivered by three IRIS units built at three-year intervals, financed at 10 percent for 10 years, requires a maximum negative cash flow of less than $700 million (compared with about three times that for a single 1,000-MWe unit). For developed countries, small modular units offer the opportunity of building capacity as needed; for developing countries, smaller units may be the only option, since such countries’ electric grids are likely unable to accommodate 1,000-plus-MWe single units.
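The financing advantage of staging is easy to see in a simplified model. The sketch below uses hypothetical, unitless costs and revenues and ignores financing charges, so it does not reproduce Westinghouse’s figures; it only shows why sequential modules cap the peak negative cash position well below that of one large unit of the same total cost.

    # Hypothetical placeholders; not Westinghouse's estimates.
    UNIT_COST = 1.0        # capital cost per ~330-MWe module (arbitrary units)
    BUILD_YEARS = 3        # construction time per module
    START_OFFSET = 3       # a new module starts every three years
    ANNUAL_REVENUE = 0.15  # net revenue per operating module per year
    N_UNITS = 3
    HORIZON_YEARS = 15

    def cash_position(year: int) -> float:
        """Cumulative cash flow at the end of `year` for the staged build."""
        total = 0.0
        for unit in range(N_UNITS):
            start = unit * START_OFFSET
            # Spread each module's capital spend evenly over its build years.
            years_building = min(max(year - start, 0), BUILD_YEARS)
            total -= UNIT_COST * years_building / BUILD_YEARS
            # Count revenue once the module is online.
            years_operating = max(year - (start + BUILD_YEARS), 0)
            total += ANNUAL_REVENUE * years_operating
        return total

    peak_negative = min(cash_position(y) for y in range(HORIZON_YEARS + 1))
    print(f"Peak negative cash flow, staged build: {peak_negative:.2f}")
    print(f"Peak negative cash flow, single unit of equal cost: {-N_UNITS * UNIT_COST:.2f}")

With these placeholder numbers the staged build bottoms out at roughly half the exposure of the single large unit; the published IRIS estimate is more favorable still, since each completed module’s revenue is reinvested and financing is spread over time.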

Distributed generation. The advent of reactors much smaller than those being promoted today means that reactors will be available to serve smaller grids and to be put into use for distributed generation (with power plants close to the demand centers and the main grid used for back-up). This does not mean, however, that large units serving national grids will become obsolete – as some appear to wish.

WORLD MARKET

One aspect of the Global Nuclear Energy Partnership program is international deployment of appropriately sized reactors with desirable designs and operational characteristics (some of which include improved economics, greater safety margins, longer operating cycles with refueling intervals of up to three years, better proliferation resistance and sustainability). Several of the designs described earlier in this paper are likely to meet these criteria.

IRIS itself is being developed by an international team of 20 organizations from ten countries (Brazil, Croatia, Italy, Japan, Lithuania, Mexico, Russia, Spain, the United Kingdom and the United States) on four continents – a clear demonstration of how reactor development is proceeding more widely.

Major reactor designers and vendors are now typically international in character and marketing structure. To wit: the United Kingdom’s recent announcement that it would renew its nuclear power capacity was anticipated by four companies lodging applications for generic design approval – two from the United States (each with Japanese involvement), one from Canada and one from France (with German involvement). These are all big units, but in demonstrating the viability of late third-generation technology, they will also encourage consideration of smaller plants where those are most appropriate.

Software-Based Intelligence: The Missing Link in the SmartGrid Vision

Achieving the SmartGrid vision requires more than advanced metering infrastructure (AMI), supervisory control and data acquisition (SCADA), and advanced networking technologies. While these critical technologies provide the main building blocks of the SmartGrid, its fundamental keystone – its missing link – will be embedded software applications located closer to the edge of the electric distribution network. Only through embedded software will the true SmartGrid vision be realized.

To understand what we mean by the SmartGrid, let’s take a look at some of its common traits:

  • It’s highly digital.
  • It’s self-healing.
  • It offers distributed participation and control.
  • It empowers the consumer.
  • It fully enables electricity markets.
  • It optimizes assets.
  • It’s evolvable and extensible.
  • It provides information security and privacy.
  • It features enhanced reliability and resilience.

All of the above-described traits – which together comprise a holistic definition of the SmartGrid – share the requirement to embed intelligence in the hardware infrastructure (which is composed of advanced grid components such as AMI and SCADA). Just as important as the hardware hosting the embedded software are the communications and networking technologies that enable real-time and near-real-time communications among the various grid components.

The word intelligence has many definitions; however, the one cited in the 1994 Wall Street Journal article “Mainstream Science on Intelligence” (by Linda Gottfredson, and signed by 51 other professors) offers a reasonable application to the SmartGrid. It defines the word intelligence as the “ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience.”

While the ability of the grid to approximate the reasoning and learning capabilities of humans may be a far-off goal, the fact that the terms intelligence and smart appear so often these days raises the following question: How can the existing grid become the SmartGrid?

THE BRAINS OF THE OPERATION

It cannot be overemphasized that the SmartGrid derives its intelligence from analytics and algorithms, delivered via embedded applications built on analytical software. While seemingly simple in concept and well understood in other industries, this topic typically isn’t addressed in any depth in many SmartGrid R&D and pilot projects. Given how quickly the SmartGrid label has spread, every company with any related technology is calling that technology SmartGrid technology – all well and good, as long as you aren’t concerned about actually having intelligence in your SmartGrid project. It is this author’s opinion, however, that very few companies actually have the right stuff to claim the “smart” or “intelligence” part of the SmartGrid infrastructure – what we see as the missing link in the SmartGrid value chain.

A more realistic way to define intelligence in reference to the SmartGrid might read as follows:

The ability to provide longer-term planning and balancing of the grid; near-real-time and real-time sensing, filtering, planning and balancing; and additional capabilities for self-healing, adaptive response and upgradeable logic to support continuous changes to grid operations in order to ensure cost reductions, reliability and resilience.

Software-based intelligence can be applied to all aspects or characteristics of the SmartGrid as discussed above. Figure 1 summarizes these roles.

BASIC BUILDING BLOCKS

Taking into consideration the very high priority that must be placed on established IT-industry concepts of security and interoperability as defined in the GridWise Architecture Council (GWAC) Framework for Interoperability, the SmartGrid should include as its basic building blocks the components outlined in Figure 2.

The real-world grid and supporting infrastructure will need to incorporate legacy systems as well as incremental changes consisting of multiple and disparate upgrade paths. The ideal path to realizing the SmartGrid vision is to install any SmartGrid project in the order shown in Figure 2 – that is, device hardware in Block 1, communications and networking infrastructure in Block 2, embedded intelligence in Block 3, and middleware services and applications in Block 4. In a perfect world, the embedded intelligence software in Block 3 would be configured into the device block at the time of design or purchase. Intelligence types (in the form of services or applications) that could be preconfigured into the device layer with embedded software include, but aren’t limited to, the following (a minimal interface sketch follows the list):

  • Capture. Provides status and reports on operation, performance and usage of a given monitored device or environment.
  • Diagnose. Enables device to self-optimize or allows a service person to monitor, troubleshoot, repair and maintain devices; upgrades or augments performance of a given device; and prevents problems with version control, technology obsolescence and device failure.
  • Control and automate. Coordinates the sequenced activity of several devices. This kind of intelligence can also cause devices to perform discrete on/off actions.
  • Profile and track behavior. Monitors variations in the location, culture, performance, usage and sales of a device.
  • Replenishment and commerce. Monitors consumption of a device and buying patterns of the end-user (allowing applications to initiate purchase orders or other transactions when replenishment is needed); provides location mapping and logistics; tracks and optimizes the service support system for devices.
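The sketch below renders the intelligence types above as a single device-level interface. The class and method names are invented for illustration and do not correspond to any particular vendor’s embedded API.

    from abc import ABC, abstractmethod

    class EmbeddedDeviceIntelligence(ABC):
        """Illustrative contract for intelligence preconfigured at the device layer."""

        @abstractmethod
        def capture(self) -> dict:
            """Report status, performance and usage of the monitored device."""

        @abstractmethod
        def diagnose(self) -> list[str]:
            """Return detected issues so the device can self-optimize or be serviced."""

        @abstractmethod
        def control(self, command: str) -> None:
            """Perform a discrete on/off action or one step of a coordinated sequence."""

        @abstractmethod
        def track(self) -> dict:
            """Monitor variations in location, usage and performance over time."""

        @abstractmethod
        def replenish(self) -> None:
            """Initiate a purchase order or service call when consumption warrants it."""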

EMBEDDED INTELLIGENCE AT WORK

Intelligence types will, of course, differ according to their application. For example, a distribution utility looking to optimize assets and real-time distribution operations may need sophisticated mathematical and artificial intelligence solutions with dynamic, nonlinear optimization models (to accommodate a high degree of uncertainty), while a homeowner wishing to participate in demand response may require only simple business rules. The embedded intelligence is therefore responsible for managing and mining potentially billions, if not trillions, of device-generated data points for decision support, settlement, reliability and other financially significant transactions. This computational intelligence can sense, store and analyze any number of information patterns to support the SmartGrid vision. The software infrastructure portion of the SmartGrid building blocks must accommodate the full range of these cases – from simple to complex – if the economics are to be viable.
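The two ends of that spectrum might look like the following hypothetical sketch: a one-line business rule for the homeowner, and a deliberately crude stub where a utility would substitute its optimization model. Names and thresholds are invented for illustration.

    def homeowner_demand_response(price_per_kwh: float, threshold: float = 0.25) -> bool:
        """Simple business rule: curtail discretionary load when price exceeds a threshold."""
        return price_per_kwh > threshold

    def utility_feeder_dispatch(feeder_loads_mw: list[float], capacity_mw: float) -> list[float]:
        """Stub: proportionally trim loads that push a feeder past its capacity.
        A production version would solve a dynamic, nonlinear optimization
        under uncertainty instead of this crude scaling."""
        total = sum(feeder_loads_mw)
        if total <= capacity_mw:
            return feeder_loads_mw
        scale = capacity_mw / total
        return [load * scale for load in feeder_loads_mw]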

For example, the GridAgents software platform is being used in several large U.S. utility distribution automation infrastructure enhancements to embed intelligence throughout the distribution and extended infrastructure; this in turn facilitates multiple applications simultaneously, as depicted in Figure 3 (highlighting microgrids and compact networks). Example applications include:

  • Renewables integration;
  • Large-scale virtual power plant applications;
  • Volt and VAR management;
  • SmartMeter management and demand response integration;
  • Condition-based maintenance;
  • Asset management and optimization;
  • Fault location, isolation and restoration;
  • Look-ahead contingency analysis;
  • Distribution operation model analysis;
  • Relay protection coordination;
  • “Islanding” and microgrid control; and
  • Sense-and-respond applications.

Using this model of embedded intelligence, the universe of potential devices that could be directly included in the grid system spans building and home automation, distribution automation, substation automation, the transmission system, and energy markets and operations – all part of what Harbor Research terms the Pervasive Internet. The Pervasive Internet concept assumes that devices are connected using TCP/IP protocols; it is not limited, however, by whether a particular network is a mission-critical SCADA network or a home automation network (the two obviously require very different security protocols). As the missing link, the embedded software intelligence we’ve been talking about can be present in any of these Pervasive Internet devices.

DELIVERY SYSTEMS

There are many ways to deliver the embedded software intelligence building block of the SmartGrid, and many vendors will be vying to participate in this rapidly expanding market. In a physical sense, the embedded intelligence can be delivered through various grid interfaces, including facility-level and distribution-system automation and energy management systems. The best way to realize the SmartGrid vision, however, will most likely come from making as much use as possible of the existing infrastructure (since installing new infrastructure is extremely costly). The most promising areas for embedding intelligence include the various gateways and collector nodes, as well as devices on the grid itself (as shown in Figure 4). Examples of such devices include SmartMeter gateways, substation PCs, inverter gateways and so on. By taking advantage of the natural and distributed hierarchy of device networks, multiple SmartGrid service offerings can be delivered over a common infrastructure with common protocols.

Some of the most promising technologies for delivering the embedded intelligence layer of the SmartGrid infrastructure include the following (a brief software-agent sketch follows the list):

  • The Semantic Web is an extension of the current Web that permits machine-understandable data. It provides a common framework that allows data to be shared and reused across application and company boundaries. It integrates applications using URIs for naming and XML for syntax.
  • Service-oriented computing represents a cross-disciplinary approach to distributed software. Services are autonomous, platform-independent computational elements that can be described, published, discovered, orchestrated and programmed using standard protocols. These services can be combined into networks of collaborating applications within and across organizational boundaries.
  • Software agents are autonomous, problem-solving computational entities. They often interact and cooperate with other agents (both people and software) that may have conflicting aims. Known as multi-agent systems, such environments add the ability to coordinate complex business processes and adapt to changing conditions on the fly.
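To make the agent idea concrete, here is a toy two-agent exchange. It is not the GridAgents platform or any other product; the node names, limits and messages are invented for illustration.

    from typing import Optional

    class VoltageAgent:
        """Watches one node and proposes an action when a local limit is violated."""
        def __init__(self, node_id: str, upper_limit_pu: float = 1.05):
            self.node_id = node_id
            self.upper_limit_pu = upper_limit_pu  # per-unit voltage limit

        def perceive(self, voltage_pu: float) -> Optional[dict]:
            if voltage_pu > self.upper_limit_pu:
                return {"from": self.node_id, "request": "lower_tap", "value": voltage_pu}
            return None

    class CoordinatorAgent:
        """Collects proposals from neighboring agents and resolves conflicts."""
        def decide(self, proposals: list[dict]) -> Optional[dict]:
            if not proposals:
                return None
            # Act on the worst violation first; other requests wait a cycle.
            return max(proposals, key=lambda p: p["value"])

    agents = [VoltageAgent("node-1"), VoltageAgent("node-2")]
    readings = {"node-1": 1.07, "node-2": 1.02}
    proposals = [p for a in agents if (p := a.perceive(readings[a.node_id]))]
    print(CoordinatorAgent().decide(proposals))  # the coordinator acts on node-1 first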

CONCLUSION

By incorporating the missing link in the SmartGrid infrastructure – the embedded-intelligence software building block – not only can the SmartGrid vision be achieved, but significant benefits to the utility and other stakeholders can be delivered far more efficiently, through incremental changes to the functions supporting that vision. Embedded intelligence provides a structured way to communicate with and control the large number of disparate energy-sensing, communications and control systems within the electric grid infrastructure. This includes the ability to deploy at low cost, to scale, to enable security, and to interoperate with the many types of devices, communication networks, data protocols and software systems used to manage complex energy networks.

A fully distributed intelligence approach based on embedded software offers potential advantages in lower cost, flexibility, security, scalability and acceptance among a wide group of industry stakeholders. By embedding functionality in software and distributing it across the electrical distribution network, intelligence is pushed to the edge of the system network, where it can provide the most value. In this way, every node can host an intelligent software program. Although decentralized structures remain a controversial topic, this author believes they will be critical to the success of next-generation energy networks (the SmartGrid). The current electrical grid infrastructure already contains a large number of devices that provide data and can serve as starting points for embedded smart monitoring and decision support, including electric meters, distribution equipment, network protectors, distributed energy resources and energy management systems. From a high-level design perspective, the embedded intelligence software architecture needs to support the following (a minimal sketch follows the list):

  • Decentralized management and intelligence;
  • Extensibility and reuse of software applications;
  • New components that can be added to or removed from the system with little central control or coordination;
  • Fault tolerance both at the system level and the subsystem level to detect and recover from system failures;
  • Support for carrying out analysis and control where the resources are available rather than where the results are needed (at the edge versus the central grid);
  • Compatibility with different information technology devices and systems;
  • Open communication protocols that run on any network; and
  • Interoperability and integration with existing and evolving energy standards.
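As a minimal sketch of two of these requirements, the code below shows components registering and unregistering with a grid-edge node at runtime, with a local health sweep standing in for subsystem-level fault detection. Class and function names are illustrative, not a reference design.

    from typing import Callable

    class NodeRuntime:
        """One grid-edge node hosting loosely coupled intelligence components."""
        def __init__(self) -> None:
            self._components: dict[str, Callable[[], bool]] = {}

        def register(self, name: str, health_check: Callable[[], bool]) -> None:
            """Components announce themselves at runtime; no central approval step."""
            self._components[name] = health_check

        def unregister(self, name: str) -> None:
            self._components.pop(name, None)

        def sweep(self) -> list[str]:
            """Run local health checks; return failed components for recovery."""
            return [name for name, check in self._components.items() if not check()]

    runtime = NodeRuntime()
    runtime.register("volt_var_control", lambda: True)
    runtime.register("fault_location", lambda: False)  # simulate a failed subsystem
    print(runtime.sweep())  # -> ['fault_location']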

Adding the embedded-intelligence building block to existing SmartGrid infrastructure projects (including AMI and SCADA) and advanced networking technology projects will bring the SmartGrid vision to market faster and more economically while accommodating the incremental nature of SmartGrid deployments. The embedded intelligence software can provide some of the greatest benefits of the SmartGrid, including asset optimization, run-time intelligence and flexibility, the ability to solve multiple problems with a single infrastructure and greatly reduced integration costs through interoperability.