The Smart Grid: A Balanced View

Energy systems in both mature and developing economies around the world are undergoing fundamental changes. There are early signs of a physical transition from the current centralized energy generation infrastructure toward a distributed generation model, where active network management throughout the system creates a responsive and manageable alignment of supply and demand. At the same time, the desire for market liquidity and transparency is driving the world toward larger trading areas – from national to regional – and providing end-users with new incentives to consume energy more wisely.

CHALLENGES RELATED TO A LOW-CARBON ENERGY MIX

The structure of current energy systems is changing. As load and demand for energy continue to grow, many current-generation assets – particularly coal and nuclear systems – are aging and reaching the end of their useful lives. The increasing public awareness of sustainability is simultaneously driving the international community and national governments alike to accelerate the adoption of low-carbon generation methods. Complicating matters, public acceptance of nuclear energy varies widely from region to region.

Public expectations of what distributed renewable energy sources can deliver – for example, wind, photovoltaic (PV) or micro-combined heat and power (micro-CHP) – are increasing. But unlike conventional sources of generation, the output of many of these sources is driven not by electricity load but by weather conditions or, in the case of micro-CHP, by heat demand. From a system perspective, this raises new challenges for balancing supply and demand.

In addition, these new distributed generation technologies require system-dispatching tools to effectively control the low-voltage side of electrical grids. Moreover, they indirectly create a scarcity of “regulating energy” – the energy necessary for transmission operators to maintain the real-time balance of their grids. This forces the industry to try to harness the power of conventional central generation technologies, such as nuclear power, in new ways.
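
To make the idea of regulating energy concrete, here is a minimal sketch in Python that sizes the reserve a transmission operator might hold against renewable forecast error. The installed capacity, error statistics and three-sigma coverage rule are illustrative assumptions, not figures from this article.

  # Rough sizing of a "regulating energy" reserve to cover renewable
  # forecast error. All numbers are illustrative assumptions.
  installed_wind_mw = 2_000      # assumed installed wind capacity
  forecast_error_std = 0.12      # assumed std. dev. of forecast error (fraction of capacity)
  coverage_sigmas = 3.0          # hold enough reserve to cover ~99.7% of error outcomes

  regulating_reserve_mw = coverage_sigmas * forecast_error_std * installed_wind_mw
  print(f"Regulating reserve needed: {regulating_reserve_mw:.0f} MW "
        f"({regulating_reserve_mw / installed_wind_mw:.0%} of installed wind)")
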

A European Union-funded consortium named Fenix is identifying innovative network and market services that distributed energy resources can potentially deliver, once the grid becomes “smart” enough to integrate all energy resources.

In Figure 1, the Status Quo Future represents how system development would play out under the traditional system operation paradigm characterized by today’s centralized control and passive distribution networks. The alternative, Fenix Future, represents the system capacities with distributed energy resources (DER) and demand-side generation fully integrated into system operation, under a decentralized operating paradigm.

CHALLENGES RELATED TO NETWORK OPERATIONAL SECURITY

The regulatory push toward larger trading areas is increasing the number of market participants. This trend is in turn driving the need for increased network dispatch and control capabilities. Simultaneously, grid operators are expanding their responsibilities across new and complex geographic regions. Combine these factors with an aging workforce (particularly when trying to staff strategic processes such as dispatching), and it’s easy to see why utilities are becoming increasingly dependent on information technology to automate processes that were once performed manually.

Moreover, the stochastic nature of energy sources significantly increases uncertainty regarding supply. Researchers are trying to improve the accuracy of the information captured in substations, but this requires new online dispatching stability tools. Additionally, as grid expansion remains politically controversial, current efforts are mostly focused on optimizing energy flow in existing physical assets, and on trying to feed asset data into systems calculating operational limits in real time.

Last but not least, this enables the extension of generation dispatch and congestion management into low-voltage distribution grids. Although these grids were traditionally used to flow energy one way – from generation through transmission to end-users – the increasing penetration of distributed resources creates a new need to coordinate the dispatch of these resources locally, and to minimize transportation costs.

CHALLENGES RELATED TO PARTICIPATING DEMAND

Recent events have shown that decentralized energy markets are vulnerable to price volatility. This poses a potentially significant economic threat for some nations: large industrial companies may leave deregulated countries because they lack visibility into long-term energy price trends.

One potential solution is to improve market liquidity in the shorter term by providing end-users with incentives to conserve energy when demand exceeds supply. The growing public awareness of energy efficiency is already leading end-users to be much more receptive to using sustainable energy; many utilities are adding economic incentives to further motivate end-users.

These trends are expected to create radical shifts in transmission and distribution (T&D) investment activities. After all, traditional centralized system designs, investments and operations are based on the premise that demand is passive and uncontrollable, and that it makes no active contribution to system operations.

However, the extensive rollout of intelligent metering capabilities has the potential to reverse this, and to enable demand to respond to market signals, so that end-users can interact with system operators in real or near real time. The widening availability of smart metering thus has the potential to bring with it unprecedented levels of demand response that will completely change the way power systems are planned, developed and operated.
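
As a minimal illustration of demand responding to a market signal through smart metering, the Python sketch below sheds flexible loads when the published price crosses a threshold the end-user has accepted. The prices, loads and device names are hypothetical.

  # Toy price-responsive demand behind a smart meter.
  # Prices, loads and device names are hypothetical.
  FLEXIBLE_LOADS_KW = {"water_heater": 4.5, "ev_charger": 7.2, "pool_pump": 1.1}
  PRICE_THRESHOLD = 0.30  # $/kWh the customer has agreed to respond to

  def respond_to_price(price_per_kwh, flexible_loads=FLEXIBLE_LOADS_KW):
      """Return the loads to curtail for the current metering interval."""
      if price_per_kwh <= PRICE_THRESHOLD:
          return {}                                # price acceptable: no action
      # Shed the largest flexible loads first.
      return dict(sorted(flexible_loads.items(), key=lambda kv: -kv[1]))

  for price in (0.12, 0.45):
      shed = respond_to_price(price)
      print(f"price ${price:.2f}/kWh -> shed {sum(shed.values()):.1f} kW: {sorted(shed)}")
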

CHALLENGES RELATED TO REGULATION

Parallel with these changes to the physical system structure, the market and regulatory frameworks supporting energy systems are likewise evolving. Numerous energy directives have established the foundation for a decentralized electricity supply industry that spans formerly disparate markets. This evolution is changing the structure of the industry from vertically integrated, state-owned monopolies into an environment in which unbundled generation, transmission, distribution and retail organizations interact freely via competitive, accessible marketplaces to procure and sell system services and contracts for energy on an ad hoc basis.

Competition and increased market access seem to be working at the transmission level in markets where there are just a handful of large generators. However, this approach has yet to be proven at the distribution level, where it could facilitate thousands and potentially millions of participants offering energy and systems services in a truly competitive marketplace.

MEETING THE CHALLENGES

As a result, despite all the promise of distributed generation, an increasingly decentralized system will become unstable without the corresponding development of technical, market and regulatory frameworks over the next three to five years.

System management costs are increasing, and threats to system security are a growing concern as installed distributed generating capacity in some areas exceeds local peak demand. The amount of “regulating energy” that must be provisioned rises as stress on the system increases; meanwhile, governments continue to push for greater distributed resource penetration and to launch new energy efficiency initiatives.

At the same time, most of the large T&D utilities intend to launch new smart grid prototypes that, once stabilized, will be scalable to millions of connection points. The majority of these rollouts are expected to occur between 2010 and 2012.

From a functionality standpoint, the majority of these associated challenges are related to IT system scalability. The process will require applying existing algorithms and processes to generation activities, but in an expanded and more distributed manner.

The following new functions will be required to build a smart grid infrastructure that enables all of this:

New generation dispatch. This will enable utilities to expand their portfolios of generation-dispatching tools to schedule generation assets connected at both the transmission and distribution levels. Utilities could thus better manage the growing number of parameters affecting each dispatch decision, including fuel options, maintenance strategies, the generation unit’s physical capability, weather, network constraints, load models, emissions (modeling, rights, trading) and market dynamics (indices, liquidity, volatility). (A simplified dispatch sketch follows this list.)

Renewable and demand-side dispatching systems. By expanding current energy management system (EMS) capability and architecture, utilities should be able to scale to include millions of active producers and consumers. Resources will be dispatched in real time by energy service companies, promoting the most eco-friendly portfolio dispatch based on contractual arrangements between the energy service providers and these distributed producers and consumers.

Integrated online asset management systems. New technology tools that help transmission grid operators assess the condition of their overall assets in real time will not only maximize asset usage, but will lead to better leveraging of utilities’ field forces. New standards such as IEC 61850 offer opportunities to manage such models more centrally and more consistently.

Online stability and defense plans. The increasing penetration of renewable generation into grids, combined with deregulation, increases the need for flow control at interconnections between transmission system operators (TSOs). Additionally, the industry requires improved “situation awareness” tools in the control centers of utilities operating in larger geographical markets. Although conventional steady-state transmission security indicators have improved, utilities still need better early-warning applications and adaptable defense plan systems.
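
To make the first of these functions – new generation dispatch – concrete, here is a deliberately simplified merit-order dispatch in Python: units are ranked by an all-in marginal cost (fuel plus an assumed emissions price) and loaded up to their capability until forecast load is met. The unit data and prices are invented for illustration; a production dispatching tool would also model network constraints, maintenance strategies and market dynamics as described above.

  # Simplified merit-order dispatch: rank units by marginal cost
  # (fuel + assumed CO2 price) and load them until demand is met.
  # Unit data and prices are illustrative only.
  CO2_PRICE = 25.0  # $/tonne, assumed

  # name, capacity (MW), fuel cost ($/MWh), emissions (tCO2/MWh)
  UNITS = [
      ("wind",        400,    0.0, 0.00),
      ("nuclear",    1200,    9.0, 0.00),
      ("coal",        900,   30.0, 0.95),
      ("ccgt",        800,   45.0, 0.37),
      ("oil_peaker",  150,  120.0, 0.78),
  ]

  def dispatch(load_mw):
      """Load units in merit order (fuel + CO2 cost) until demand is met."""
      ranked = sorted(UNITS, key=lambda u: u[2] + CO2_PRICE * u[3])
      schedule, remaining, cost = {}, load_mw, 0.0
      for name, cap, fuel, co2 in ranked:
          if remaining <= 0:
              break
          mw = min(cap, remaining)
          schedule[name] = mw
          cost += mw * (fuel + CO2_PRICE * co2)
          remaining -= mw
      return schedule, cost

  schedule, cost = dispatch(2600)
  for name, mw in schedule.items():
      print(f"{name:>10}: {mw:6.0f} MW")
  print(f"hourly cost: ${cost:,.0f}")
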

MOVING TOWARDS A DISTRIBUTED FUTURE

As concerns about energy supply have increased worldwide, the focus on curbing demand has intensified. Regulatory bodies around the world are thus actively investigating smart meter options. But despite the benefits that smart meters promise, they also raise new challenges on the IT infrastructure side. Before each end-user is able to flexibly interact with the market and the distribution network operator, massive infrastructure re-engineering will be required.

Nonetheless, energy systems throughout the world are already evolving from a centralized to a decentralized model. But to successfully complete this transition, utilities must implement active network management throughout their systems to enable a responsive and manageable alignment of supply and demand. By accomplishing this, energy producers and consumers alike can better match supply and demand, and drive the world toward sustainable energy conservation.

Software-Based Intelligence: The Missing Link in the SmartGrid Vision

Achieving the SmartGrid vision requires more than advanced metering infrastructure (AMI), supervisory control and data acquisition (SCADA), and advanced networking technologies. While these critical technologies provide the main building blocks of the SmartGrid, its fundamental keystone – its missing link – will be embedded software applications located closer to the edge of the electric distribution network. Only through embedded software will the true SmartGrid vision be realized.

To understand what we mean by the SmartGrid, let’s take a look at some of its common traits:

  • It’s highly digital.
  • It’s self-healing.
  • It offers distributed participation and control.
  • It empowers the consumer.
  • It fully enables electricity markets.
  • It optimizes assets.
  • It’s evolvable and extensible.
  • It provides information security and privacy.
  • It features an enhanced system for reliability and resilience.

All of the above-described traits – which together comprise a holistic definition of the SmartGrid – share the requirement to embed intelligence in the hardware infrastructure (which is composed of advanced grid components such as AMI and SCADA). Just as important as the hardware for hosting the embedded software are the communications and networking technologies that enable real-time and near real-time communications among the various grid components.

The word intelligence has many definitions; however, the one cited in the 1994 Wall Street Journal article “Mainstream Science on Intelligence” (by Linda Gottfredson, and signed by 51 other professors) offers a reasonable application to the SmartGrid. It defines the word intelligence as the “ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience.”

While the ability of the grid to approximate the reasoning and learning capabilities of humans may be a far-off goal, the fact that the terms intelligence and smart appear so often these days begs the following question: How can the existing grid become the SmartGrid?

THE BRAINS OF THE OPERATION

The fact that the SmartGrid derives its intelligence directly from analytics and algorithms via embedded intelligence applications based on analytical software can’t be overemphasized. While seemingly simple in concept and well understood in other industries, this topic typically isn’t addressed in any depth in many SmartGrid R&D and pilot projects. Due to the viral nature of the SmartGrid industry, every company with any related technology is calling that technology SmartGrid technology – all well and good, as long as you aren’t concerned about actually having intelligence in your SmartGrid project. It is this author’s opinion, however, that very few companies actually have the right stuff to claim the “smart” or “intelligence” part of the SmartGrid infrastructure – what we see as the missing link in the SmartGrid value chain.

A more realistic way to define intelligence in reference to the SmartGrid might read as follows:

The ability to provide longer-term planning and balancing of the grid; near real-time and real-time sensing, filtering, planning and balancing of the grid; and additional capabilities for self-healing, adaptive response and upgradeable logic to support continuous changes to grid operations in order to ensure cost reductions, reliability and resilience.

Software-based intelligence can be applied to all aspects or characteristics of the SmartGrid as discussed above. Figure 1 summarizes these roles.

BASIC BUILDING BLOCKS

Taking into consideration the very high priority that must be placed on established IT-industry concepts of security and interoperability as defined in the GridWise Architecture Council (GWAC) Framework for Interoperability, the SmartGrid should include as its basic building blocks the components outlined in Figure 2.

The real-world grid and supporting infrastructure will need to incorporate legacy systems as well as incremental changes consisting of multiple and disparate upgrade paths. The ideal path to realizing the SmartGrid vision must consider the installation of any SmartGrid project using the order shown in Figure 2 – that is, the device hardware would be installed in Block 1, communications and networking infrastructure added in Block 2, embedded intelligence added in Block 3, and middleware services and applications layered in Block 4. In a perfect world, the embedded intelligence software in Block 3 would be configured into the device block at the time of design or purchase. Some intelligence types (in the form of services or applications) that could be preconfigured into the device layer with embedded software could include (but aren’t limited to) the following:

  • Capture. Provides status and reports on operation, performance and usage of a given monitored device or environment.
  • Diagnose. Enables device to self-optimize or allows a service person to monitor, troubleshoot, repair and maintain devices; upgrades or augments performance of a given device; and prevents problems with version control, technology obsolescence and device failure.
  • Control and automate. Coordinates the sequenced activity of several devices. This kind of intelligence can also cause devices to perform discrete on/off actions.
  • Profile and track behavior. Monitors variations in the location, culture, performance, usage and sales of a device.
  • Replenishment and commerce. Monitors consumption of a device and buying patterns of the end-user (allowing applications to initiate purchase orders or other transactions when replenishment is needed); provides location mapping and logistics; tracks and optimizes the service support system for devices.
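
As one way to picture how the intelligence types listed above could be packaged as embedded software on a grid device, the Python sketch below defines a minimal agent interface with one method per type. The class, method names and thresholds are this author's hypothetical framing, not an actual product API.

  # Hypothetical sketch of an embedded device agent exposing the
  # intelligence types listed above. Names and data are illustrative.
  import time

  class DeviceAgent:
      def __init__(self, device_id):
          self.device_id = device_id
          self.readings = []

      def capture(self, reading):
          """Capture: record status/usage of the monitored device."""
          self.readings.append({"t": time.time(), "value": reading})

      def diagnose(self):
          """Diagnose: flag readings outside an assumed normal band."""
          return [r for r in self.readings if not 0.9 <= r["value"] <= 1.1]

      def control(self, on):
          """Control and automate: issue a discrete on/off action."""
          return f"{self.device_id} switched {'ON' if on else 'OFF'}"

      def profile(self):
          """Profile and track behavior: summarize recent usage."""
          values = [r["value"] for r in self.readings] or [0.0]
          return {"samples": len(values), "avg": sum(values) / len(values)}

      def replenish(self, consumed, reorder_at=10.0):
          """Replenishment and commerce: trigger an order when stock is low."""
          return "order placed" if consumed >= reorder_at else "stock ok"

  agent = DeviceAgent("transformer-042")
  for v in (0.98, 1.02, 1.25):
      agent.capture(v)
  print(agent.diagnose(), agent.profile(), agent.control(False))
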

EMBEDDED INTELLIGENCE AT WORK

Intelligence types will, of course, differ according to their application. For example, a distribution utility looking to optimize assets and real-time distribution operations may need sophisticated mathematical and artificial intelligence solutions with dynamic, nonlinear optimization models (to accommodate a high amount of uncertainty), while a homeowner wishing to participate in demand response may require less sophisticated business rules. The embedded intelligence is, therefore, responsible for the management and mining of potentially billions, if not trillions, of device-generated data points for decision support, settlement, reliability and other financially significant transactions. This computational intelligence can sense, store and analyze any number of information patterns to support the SmartGrid vision. In all cases, the software infrastructure portion of the SmartGrid building blocks must accommodate any number of these cases – from simple to complex – if the economics are to be viable.

For example, the GridAgents software platform is being used in several large U.S. utility distribution automation infrastructure enhancements to embed intelligence in the entire distribution and extended infrastructure; this in turn facilitates multiple applications simultaneously, as depicted in Figure 3 (highlighting microgrids and compact networks). Included are the following example applications: renewables integration, large-scale virtual power plant applications, volt and VAR management, SmartMeter management and demand response integration, condition-based maintenance, asset management and optimization, fault location, isolation and restoration, look-ahead contingency analysis, distribution operation model analysis, relay protection coordination, “islanding” and microgrid control, and sense-and-respond applications.

Using this model of embedded intelligence, the universe of potential devices that could be directly included in the grid system spans building and home automation, distribution automation, substation automation, transmission systems, and energy market and operations – all part of what Harbor Research terms the Pervasive Internet. The Pervasive Internet concept assumes that devices are connected using TCP/IP protocols; however, it is not limited by whether a particular network is a mission-critical SCADA network or a home automation network (which obviously require very different security protocols). As the missing link, the embedded software intelligence we’ve been talking about can be present in any of these Pervasive Internet devices.

DELIVERY SYSTEMS

There are many ways to deliver the embedded software intelligence building block of the SmartGrid, and many vendors will be vying to participate in this rapidly expanding market. In a physical sense, the embedded intelligence can be delivered through various grid interfaces, including facility-level and distribution-system automation and energy management systems. The best way to realize the SmartGrid vision, however, will most likely come from making as much use as possible of the existing infrastructure (since installing new infrastructure is extremely costly). The most promising areas for embedding intelligence include the various gateways and collector nodes, as well as devices on the grid itself (as shown in Figure 4). Examples of such devices include SmartMeter gateways, substation PCs, inverter gateways and so on. By taking advantage of the natural and distributed hierarchy of device networks, multiple SmartGrid service offerings can be delivered with a common infrastructure and common protocols.

Some of the most promising technologies for delivering the embedded intelligence layer of the SmartGrid infrastructure include the following:

  • The Semantic Web is an extension of the current Web that permits machine-understandable data. It provides a common framework that allows data to be shared and re-used across application and company boundaries. It integrates applications using URLs for naming and XML for syntax.
  • Service-oriented computing represents a cross-disciplinary approach to distributed software. Services are autonomous, platform-independent computational elements that can be described, published, discovered, orchestrated and programmed using standard protocols. These services can be combined into networks of collaborating applications within and across organizational boundaries.
  • Software agents are autonomous, problem-solving computational entities. They often interact and cooperate with other agents (both people and software) that may have conflicting aims. Known as multi-agent systems, such environments add the ability to coordinate complex business processes and adapt to changing conditions on the fly.
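
A toy sketch of the multi-agent idea just described: each software agent publishes an offer of flexible capacity, and a simple coordinator accepts the cheapest offers until a local deficit is covered. This is purely illustrative and is not the GridAgents platform or any specific agent framework; the agent names and prices are invented.

  # Toy multi-agent coordination: agents offer flexible capacity and a
  # coordinator clears a local deficit at least cost. Purely illustrative.
  class ResourceAgent:
      def __init__(self, name, available_kw, price_per_kwh):
          self.name = name
          self.available_kw = available_kw
          self.price_per_kwh = price_per_kwh

      def offer(self):
          return {"agent": self.name, "kw": self.available_kw, "price": self.price_per_kwh}

  def coordinate(agents, deficit_kw):
      """Accept the cheapest offers until the local deficit is covered."""
      accepted = []
      for offer in sorted((a.offer() for a in agents), key=lambda o: o["price"]):
          if deficit_kw <= 0:
              break
          take = min(offer["kw"], deficit_kw)
          accepted.append((offer["agent"], take))
          deficit_kw -= take
      return accepted, max(deficit_kw, 0)

  agents = [ResourceAgent("rooftop_pv", 30, 0.05),
            ResourceAgent("battery", 50, 0.12),
            ResourceAgent("backup_genset", 200, 0.40)]
  accepted, unmet = coordinate(agents, deficit_kw=120)
  print("accepted:", accepted, "| unmet kW:", unmet)
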

CONCLUSION

By incorporating the missing link in the SmartGrid infrastructure – the embedded-intelligence software building block – the SmartGrid vision can not only be achieved, but significant benefits to the utility and other stakeholders can be delivered much more efficiently and with incremental changes to the functions supporting the SmartGrid vision. Embedded intelligence provides a structured way to communicate with and control the large number of disparate energy-sensing, communications and control systems within the electric grid infrastructure. This includes the capability to deploy at low cost, scale and enable security as well as the ability to interoperate with the many types of devices, communication networks, data protocols and software systems used to manage complex energy networks.

A fully distributed intelligence approach based on embedded software offers potential advantages in lower cost, flexibility, security, scalability and acceptance among a wide group of industry stakeholders. By embedding functionality in software and distributing it across the electrical distribution network, the intelligence is pushed to the edge of the system network, where it can provide the most value. In this way, every node can be capable of hosting an intelligent software program. Although decentralized structures remain a controversial topic, this author believes they will be critical to the success of next-generation energy networks (the SmartGrid). The current electrical grid infrastructure is composed of a large number of existing potential devices that provide data which can serve as the starting point for embedded smart monitoring and decision support, including electric meters, distribution equipment, network protectors, distributed energy resources and energy management systems. From a high-level design perspective, the embedded intelligence software architecture needs to support the following:

  • Decentralized management and intelligence;
  • Extensibility and reuse of software applications;
  • New components that can be removed from or added to the system with little central control or coordination;
  • Fault tolerance both at the system level and the subsystem level to detect and recover from system failures;
  • Support for carrying out analysis and control where the resources are available rather than where the results are needed (at the edge versus the central grid);
  • Compatibility with different information technology devices and systems;
  • Open communication protocols that run on any network; and
  • Interoperability and integration with existing and evolving energy standards.

Adding the embedded-intelligence building block to existing SmartGrid infrastructure projects (including AMI and SCADA) and advanced networking technology projects will bring the SmartGrid vision to market faster and more economically while accommodating the incremental nature of SmartGrid deployments. The embedded intelligence software can provide some of the greatest benefits of the SmartGrid, including asset optimization, run-time intelligence and flexibility, the ability to solve multiple problems with a single infrastructure and greatly reduced integration costs through interoperability.

Growing (or Shrinking) Trends in Nuclear Power Plant Construction

Around the world, the prospects for nuclear power generation are improving – as evidenced by the number of plants now under construction that are smaller than those currently in the limelight. Offering advantages in certain situations, these smaller plants can more readily serve smaller grids as well as be used for distributed generation (with power plants located close to the demand centers and the main grid providing back-up). Smaller plants are also easier to finance, particularly in countries that are still in the early days of their nuclear power programs.

In recent years, development and licensing efforts have focused primarily on large, advanced reactors, due to their economies of scale and obvious application to developed countries with substantial grid infrastructure. Meanwhile, the wide scope for smaller nuclear plants has received less attention. However, of the 30 or more countries that are moving toward implementing nuclear power programs, most are likely to be looking initially for units under 1,000 MWe, and some for units of less than half that amount.

EXISTING DESIGNS

With that in mind, let’s take a look at some of the current designs.

There are many plants under 1,000 MWe now in operation, even if their replacements tend to be larger. (In 2007 four new units were connected to the grid – two large ones, one 202-MWe unit and one 655-MWe unit.) In addition, some smaller reactors are either on offer now or likely to be available in the next few years.

Five hundred to 700 MWe. There are several plants in this size range, including Westinghouse AP600 (which has U.S. design certification) and the Canadian Candu-6 (being built in Romania). In addition, China is building two CNP-600 units at Qinshan but does not plan to build any more of them. In Japan, Hitachi-GE has completed the design of a 600-MWe version of its 1,350-MWe ABWR, which has been operating for 10 years.

Two hundred and fifty to 500 MWe. In the 250- to 500-MWe category (output that is electric rather than heat), there are a few designs pending but little immediately on offer.

IRIS. Being developed by an international team led by Westinghouse in the United States, IRIS – or, more formally, International Reactor Innovative and Secure – is an advanced third-generation modular 335-MWe pressurized water reactor (PWR) with integral steam generators and a primary coolant system all within the pressure vessel. U.S. design certification is at pre-application stage with a view to final design approval by 2012 and deployment by 2015 to 2017.

VBER-300 PWR. This 295- to 325-MWe unit from Russia was designed by OKBM based on naval power plants and is now being developed as a land-based unit with the state-owned nuclear holding company Kazatomprom, with a view to exporting it. The first two units will be built in Southwest Kazakhstan under a Russian-Kazakh joint venture.

VK-300. This Russian-built boiling water reactor is being developed for co-generation of both power and district heating or heat for desalination (150 MWe plus 1675 GJ/hr) by the nuclear research and development organization NIKIET. The unit evolved from the VK-50 BWR at Dimitrovgrad but uses standard components from larger reactors wherever possible. In September 2007, it was announced that six of these units would be built at Kola and at Primorskaya in Russia’s far east, to start operating between 2017 and 2020.

NP-300 PWR. Developed in France from submarine power plants and aimed at export markets for power, heat and desalination, this Technicatome (Areva)-designed reactor has passive safety systems and can be built for applications from 100 to 300 MWe.

China is also building a 300-MWe PWR (pressurized water reactor) nuclear power plant in Pakistan at Chasma (alongside another that started up in 2000); however, this is an old design based on French technology and has not been offered more widely. The new unit is expected to come online in 2011.

One hundred to 300 MWe. This category includes both conventional PWR and high-temperature gas-cooled reactors (HTRs); however, none in the second category are being built yet. Argentina’s CAREM nuclear power plant is being developed by CNEA and INVAP as a modular 27-MWe simplified PWR with integral steam generators designed to be used for electricity generation or for water desalination.

FLOATING PLANTS

After many years of promoting the idea, Russia’s state-run atomic energy corporation Rosatom has approved construction of a nuclear power plant on a 21,500-ton barge to supply 70 MWe of power plus 586 GJ/hr of heat to Severodvinsk, in the Archangelsk region of Russia. The contract to build the first unit was let by nuclear power station operator Rosenergoatom to the Sevmash shipyard in May 2006. Expected to cost $337 million (including $30 million already spent in design), the project is 80 percent financed by Rosenergoatom and 20 percent financed by Sevmash. Operation is expected to begin in mid-2010.

Rosatom is planning to construct seven additional floating nuclear power plants, each (like the initial one) with two 35-MWe OKBM KLT-40S nuclear reactors. Five of these will be used by Gazprom – the world’s biggest extractor of natural gas – for offshore oil and gas field development and for operations on Russia’s Kola and Yamal Peninsulas. One of these reactors is planned for 2012 commissioning at Pevek on the Chukotka Peninsula, and another is planned for the Kamchatka region, both in the far east of the country. Even farther east, sites being considered include Yakutia and Taimyr. Electricity cost is expected to be much lower than from present alternatives. In 2007 an agreement was signed with the Sakha Republic (Yakutia region) to build a floating plant for its northern parts, using smaller ABV reactors.

OTHER DESIGNS

On a larger scale, South Korea’s SMART is a 100-MWe PWR with integral steam generators and advanced safety features. It is designed to generate electricity and/or thermal applications such as seawater desalination. Indonesia’s national nuclear energy agency, Batan, has undertaken a pre-feasibility study for a SMART reactor for power and desalination on Madura Island. However, this awaits the building of a reference plant in Korea.

There are three high-temperature gas-cooled reactor designs capable of being used for power generation, but much of the development impetus has been focused on the thermo-chemical production of hydrogen. Fuel for the first two consists of billiard ball-size pebbles that can withstand very high temperatures. These designs aim for a step change in safety, economics and proliferation resistance.

China’s 200-MWe HTR-PM is based on a well-tested small prototype, and a two-module plant is due to start construction at Shidaowan in Shandong province in 2009. This reactor will use the conventional steam cycle to generate power. Start-up is scheduled for 2013. After the demonstration plant, a power station with 18 modules is envisaged.

Very similar to China’s plant is South Africa’s Pebble Bed Modular Reactor (PBMR), which is being developed by a consortium led by the utility Eskom. Production units will be 165 MWe. The PBMR will have a direct-cycle gas turbine generator driven by hot helium. The PBMR Demonstration unit is expected to start construction at Koeberg in 2009 and achieve criticality in 2013.

Both of these designs are based on earlier German reactors that have some years of operational experience. A U.S. design, the Modular Helium Reactor (GT-MHR), is being developed in Russia; in its electrical application, each unit would directly drive a gas turbine giving 280 MWe.

These three designs operate at much higher temperatures than ordinary reactors and offer great potential as sources of industrial heat, including for the thermo-chemical production of hydrogen on a large scale. Much of the development thinking going into the PBMR has been geared to synthetic oil production by Sasol (South African Coal and Oil).

MODULAR CONSTRUCTION

The IRIS developers have outlined the economic case for modular construction of their design (about 330 MWe), and it’s an argument that applies similarly to other smaller units. These developers point out that IRIS, with its moderate size and simple design, is ideally suited for modular construction. The economy of scale is replaced here with the economy of serial production of many small and simple components and prefabricated sections. They expect that construction of the first IRIS unit will be completed in three years, with subsequent production taking only two years.

Site layouts have been developed with multiple single units or multiple twin units. In each case, units will be constructed with enough space around them to allow the next unit to be built while the previous one is operating and generating revenue. Even with this separation, the plant footprint can be very compact: a site with three single IRIS modules providing 1,000 MWe is similar in size to – or smaller than – a site with a single unit of comparable total power.

Eventually, IRIS’ capital and production costs are expected to be comparable to those of larger plants. However, any small unit offers potential for a funding profile and flexibility impossible to achieve with larger plants. As one module is finished and starts producing electricity, it will generate positive cash flow for the construction of the next module. Westinghouse estimates that 1,000 MWe delivered by three IRIS units built at three-year intervals, financed at 10 percent for 10 years, requires a maximum negative cash flow of less than $700 million (compared with about three times that for a single 1,000-MWe unit). For developed countries, small modular units offer the opportunity of building as necessary; for developing countries, smaller units may represent the only option, since such countries’ electric grids are likely unable to take 1,000-plus-MWe single units.
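
The cash-flow argument can be sketched numerically. The Python below compares a toy financing profile for three sequentially built modules, each earning revenue once online, against a single large unit. The per-unit costs, build times and revenues are this author's illustrative assumptions rather than Westinghouse's figures, but the qualitative result – a substantially smaller peak financing requirement for the modular program – comes through.

  # Toy cash-flow comparison: three small modules built in sequence
  # (each earning revenue once online) versus one large unit.
  # Costs, build times and revenues are illustrative assumptions only.
  def peak_negative_cash(unit_cost_m, build_years, revenue_m_per_yr, starts):
      """Track cumulative cash by year and return the worst (most negative) point."""
      horizon = max(starts) + build_years + 15
      cash, worst = 0.0, 0.0
      online = [s + build_years for s in starts]
      for year in range(horizon):
          for s in starts:                      # spend evenly during each unit's construction
              if s <= year < s + build_years:
                  cash -= unit_cost_m / build_years
          cash += revenue_m_per_yr * sum(1 for o in online if year >= o)
          worst = min(worst, cash)
      return worst

  modular = peak_negative_cash(unit_cost_m=1000, build_years=3,
                               revenue_m_per_yr=200, starts=[0, 3, 6])
  single = peak_negative_cash(unit_cost_m=3600, build_years=5,
                              revenue_m_per_yr=600, starts=[0])
  print(f"peak negative cash, three modules: ${-modular:,.0f}M")
  print(f"peak negative cash, one large unit: ${-single:,.0f}M")
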

Distributed generation. The advent of reactors much smaller than those being promoted today means that reactors will be available to serve smaller grids and to be put into use for distributed generation (with power plants close to the demand centers and the main grid used for back-up). This does not mean, however, that large units serving national grids will become obsolete – as some appear to wish.

WORLD MARKET

One aspect of the Global Nuclear Energy Partnership program is international deployment of appropriately sized reactors with desirable design and operational characteristics (some of which include improved economics, greater safety margins, longer operating cycles with refueling intervals of up to three years, better proliferation resistance and sustainability). Several of the designs described earlier in this paper are likely to meet these criteria.

IRIS itself is being developed by an international team of 20 organizations from ten countries (Brazil, Croatia, Italy, Japan, Lithuania, Mexico, Russia, Spain, the United Kingdom and the United States) on four continents – a clear demonstration of how reactor development is proceeding more widely.

Major reactor designers and vendors are now typically international in character and marketing structure. To wit: the United Kingdom’s recent announcement that it would renew its nuclear power capacity was anticipated by four companies lodging applications for generic design approval – two from the United States (each with Japanese involvement), one from Canada and one from France (with German involvement). These are all big units, but in demonstrating the viability of late third-generation technology, they will also encourage consideration of smaller plants where those are most appropriate.

The Power of Prediction: Improving the Odds of a Nuclear Renaissance

After 30 years of disfavor in the United States, the nuclear power industry is poised for resurgence. With the passage of the Energy Policy Act of 2005, the specter of over $100 per barrel oil prices and the public recognition that global warming is real, nuclear power is now considered one of the most practical ways to clean up the power grid and help the United States reduce its dependence on foreign oil. The industry has responded with a resolve to build a new fleet of nuclear plants in anticipation of what has been referred to as a nuclear renaissance.

The nuclear power industry is characterized by a remarkable level of physics and mechanical science. Yet, given the confluence of a number of problematic issues – an aging workforce, the shortage of skilled trades, the limited availability of equipment and parts, and a history of late, over-budget projects – questions arise about whether the level of management science the industry plans to use is sufficient to navigate the challenges ahead.

According to data from the Energy Information Administration (EIA), nuclear power accounts for about 20 percent of U.S. electricity generation, with approximately 106 gigawatts (GW) of capacity at 66 plants housing 104 reactor units. To date, more than 30 new reactors have been proposed, which would produce a net increase of approximately 19 GW of nuclear capacity through 2030. Considering the growth of energy demand, this increased capacity will barely keep pace with increasing base-load requirements.

According to Assistant Secretary for Nuclear Energy Dennis Spurgeon, we will need approximately 45 new reactors online by 2030 just to maintain the 20 percent share of U.S. electricity generation that nuclear power already holds.
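
A back-of-the-envelope check on that figure, in Python: the 2 percent annual load growth and 1.4-GW unit size are this author's assumptions, not EIA or DOE figures, and the sketch ignores retirements of existing units, which would push the count higher, toward the roughly 45 reactors cited above.

  # Back-of-envelope: new capacity needed by 2030 to keep nuclear at ~20%
  # of U.S. generation. Growth rate and unit size are assumed.
  current_nuclear_gw = 106        # capacity figure cited above
  demand_growth_per_yr = 0.02     # assumed average annual growth in electricity demand
  years = 2030 - 2008
  new_reactor_gw = 1.4            # assumed size of a new large reactor

  required_gw = current_nuclear_gw * (1 + demand_growth_per_yr) ** years
  additional_gw = required_gw - current_nuclear_gw
  print(f"additional nuclear capacity needed: {additional_gw:.0f} GW")
  print(f"approx. new reactors (before counting retirements): {additional_gw / new_reactor_gw:.0f}")
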

Meanwhile, Morgan Stanley vice chairman Jeffrey Holzschuh is very positive about the next generation of nuclear power but warns that the industry’s future is ultimately a question of economics. “Given the history, the markets will be cautious,” he says.

As shown in Figures 1-3, nuclear power is cost competitive with other forms of generation, but its upfront capital costs are comparatively high. Historically, long construction periods have led to serious cost volatility. The viability of the nuclear power industry ultimately depends on its ability to demonstrate that plants can be built economically and reliably. Holzschuh predicts, “The first few projects will be under a lot of public scrutiny, but if they are approved, they will get funded. The next generation of nuclear power will likely be three to five plants or 30, nothing in between.”

Due to its cohesive identity, the nuclear industry is viewed by the public and investors as a single entity, making the fate of industry operators – for better or for worse – a shared destiny. For that reason, it’s widely believed that if these first projects suffer the same sorts of significant cost overruns and delays experienced in the past, the projected renaissance will quickly give way to a return to the dark ages.

THE PLAYERS

Utility companies, regulatory authorities, reactor manufacturers, design and construction vendors, financiers and advocacy groups all have critical roles to play in creating a viable future for the nuclear power industry – one that will begin with the successful completion of the first few plants in the United States. By all accounts, an impressive foundation has been laid, beginning with an array of government incentives (such as loan guarantees and tax credits) and simplified regulation to help jump-start the industry.

Under the Energy Policy Act of 2005, the U.S. Department of Energy has the authority to issue $18.5 billion in loan guarantees for new nuclear plants and $2 billion for uranium enrichment projects. In addition, there’s standby support indemnifying the first six advanced nuclear reactors against delays caused by the Nuclear Regulatory Commission (NRC) or by litigation. The Treasury Department has issued guidelines for an allocation and approval process for production tax credits for advanced nuclear plants: a 1.8-cent-per-kilowatt-hour production tax credit for the first eight years of operation, with final rules to be issued in fiscal year 2008.

The 20-year renewal of the Price-Anderson Act in 2005 and anticipated future restrictions on carbon emissions further improve the comparative attractiveness of nuclear power. To be eligible for the 2005 production tax credits, a license application must be tendered to the NRC by the end of 2008, with construction beginning before 2014 and the plant placed in service before 2021.
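
For a sense of scale, the sketch below computes the nominal value of that 1.8-cent-per-kilowatt-hour credit for a hypothetical 1,100-MWe unit at an assumed 90 percent capacity factor. The plant size and capacity factor are assumptions, and actual awards are subject to program allocation limits not modeled here.

  # Nominal value of the 1.8 cent/kWh production tax credit over eight
  # years for a hypothetical unit. Size and capacity factor are assumed;
  # actual allocations are subject to program limits not modeled here.
  credit_per_kwh = 0.018          # dollars, from the Treasury guidelines cited above
  plant_mw = 1100                 # assumed unit size
  capacity_factor = 0.90          # assumed
  hours_per_year = 8760
  years = 8

  annual_kwh = plant_mw * 1000 * hours_per_year * capacity_factor
  annual_credit = annual_kwh * credit_per_kwh
  print(f"annual generation: {annual_kwh / 1e9:.1f} billion kWh")
  print(f"annual credit:     ${annual_credit / 1e6:.0f} million")
  print(f"eight-year total:  ${annual_credit * years / 1e6:,.0f} million")
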

The NRC has formed an Office of New Reactors (NRO), and David Matthews, director of the Division of New Reactor Licensing, led the development of the latest revision of a new licensing process that’s designed to be more predictable by encouraging the standardization of plant designs, resolving safety and environmental issues and providing for public participation before construction begins. With a fully staffed workforce and a commitment to “enable the safe, secure and environmentally responsible use of nuclear power in meeting the nation’s future energy needs,” Matthews is determined to ensure that the NRC is not a risk factor contributing to project uncertainty but rather an organizing force that creates predictability. Matthews declares, “This isn’t your father’s NRC.”

This simplified licensing process consists of the following elements:

  • An early site permit (ESP) for locations of potential facilities.
  • Design certification (DC) for the reactor design to be used.
  • Combined operating license (COL) for the certified reactor as designed to be located on the site. The COL contains the inspections, tests, analyses and acceptance criteria (ITAAC) to demonstrate that the plant was built to the approved specifications.

According to Matthews, the best-case scenario for the period between when a COL is docketed and when the license process is complete is 33 months, with an additional 12 months for public hearings. When asked if anything could be done to speed this process, Matthews reported that every delay he’s seen thus far has been attributable to a cause beyond the NRC’s control; most often, it’s the applicant that’s having a hard time meeting the schedule. Recently approved schedules are several months longer than the best-case estimate.

The manufacturers of nuclear reactors have stepped up to the plate to achieve standard design certification for their nuclear reactors; four are approved, and three are in progress.

Utility companies are taking innovative approaches to support the NRC’s standardization principles, which directly impact costs. (Current conventional wisdom puts the price of a new reactor at between $4 billion and $5.5 billion, with some estimates of fully loaded costs as high as $7 billion.) Consortiums have been formed to support cross-company standardization around a particular reactor design. NuStart and UniStar are multi-company consortiums collaborating on the development of their COLs.

Bryce Shriver, who leads PPL Corp.’s nuclear power strategy and recently announced that PPL had selected UniStar to build its next nuclear facility, is impressed with the level of standardization UniStar is employing for its plants. From the specifics of the reactor design to the carpet color, UniStar – with four plants on the drawing board – intends to make each plant as identical as possible.

Reactor designers and construction companies are adding to the standardization with turnkey approaches, formulating new construction methods that include modular techniques; sophisticated scheduling and configuration management software; automated data, project management and document control; and designs that are substantially complete before construction begins. Contractors are taking seriously the lessons learned from plants built outside the United States, and they hope to leverage what they have learned in the first few U.S. projects.

The stewards of the existing nuclear fleet also see themselves as part of the future energy solution. They know that continued safe, high-performance operation of current plants is key to maintaining public and state regulator confidence. Most of the scheduled plants are to be co-located with existing nuclear facilities.

Financing nuclear plant construction involves equity investors, utility boards of directors, debt financiers and (ultimately) the ratepayers represented by state regulatory commissions. Despite the size of these deals, the financial community has indicated that debt financing for new nuclear construction will be available. The bigger issue lies with the investors. The more equity-oriented the risk (principally borne by utilities and ratepayers), the more caution there is about the structure of these deals. The debt financiers are relying on the utilities and the consortiums to do the necessary due diligence and put up the equity. There’s no doubt that the federal loan guarantees and subsidies are an absolute necessity, but this form of support is largely driven by the perceived risk of the first projects. Once the capability to build plants in a predictable way (in terms of time, cost, output and so on) has been demonstrated, market forces are expected to be very efficient at allocating capital to these kinds of projects.

The final key to the realization of a nuclear renaissance is the public. Americans have become increasingly concerned about fossil fuels, carbon emissions and the nation’s dependence on foreign oil. The surge in oil prices has focused attention on energy costs and national security. Coal-based energy production is seen as an environmental issue. Although the United States has plenty of access to coal, dealing with carbon emissions using clean coal technology involves sequestering it and pumping it underground. PPL chairman Jim Miller describes the next challenge for clean coal as NUMBY – the “Not under my back yard” attitude the public is likely to adopt if forced to consider carbon pumped under their communities. Alternative energy sources such as wind, solar and geothermal enjoy public support, but they are not yet scalable for the challenge of cleaning up the grid. In general, the public wants clean, safe, reliable, inexpensive power.

THE RISKS

Will nuclear fill that bill and look attractive compared with the alternatives? Although progress has been made and the stage is set, critical issues remain, and they could become problematic. While the industry clearly sees and is actively managing some of these issues, there are others the industry sees but is not as certain about how to manage – and still others that are so much a part of the fabric of the industry that they go unrecognized. Any one of these issues could slow progress; the fact that there are several that could hit simultaneously multiplies the risk exponentially.

The three widely accepted risk factors for the next phase of nuclear power development are the variability of the cost of uranium, the availability of quality equipment for construction and the availability of well-trained labor. Not surprising for an industry that’s been relatively sleepy for several decades, the pipeline for production resources is weak – a problem compounded by the well-understood coming wave of retirements in the utility workforce and the general shortage of skilled trades needed to work on infrastructure projects. Combine these constraints with a surge in worldwide demand for power plants, and it’s easy to understand why the industry is actively pursuing strategies to secure materials and train labor.

The reactor designers, manufacturers and construction companies that would execute these projects display great confidence. They’re keen on the “turnkey solution” as a way to reduce the risk of multiple vendors pointing fingers when things go wrong. Yet these are the same firms that have been openly criticized for change orders and cost overruns. Christopher Crane, chief operating officer of the utility Exelon Corp., warned contractors in a recent industry meeting that the utilities would “not take all the risk this time around.” When faced with complicated infrastructure development in the past, vendors have often pointed to their expertise with complex projects. Is the development of more sophisticated scheduling and configuration management capability, along with the assignment of vendor accountability, enough to handle the complexity issue? The industry is aware of this limitation but does not as yet have strong management techniques for handling it effectively.

Early indications from regulators are that the COLs submitted to date are not meeting the NRC’s guidance and expectations in all regards, possibly a result of the applicants’ rush to make the 2008 year-end deadline for the incentives set forth in the Energy Policy Act. This could extend the licensing process and strain the resources of the NRC. In addition, the requirements of the NRC principally deal with public safety and environmental concerns. There are myriad other design requirements entailed in making a plant operate profitably.

The bigger risk is that the core strength of the industry – its ability to make significant incremental improvements – could also serve as the seed of its failure as it faces this next challenge. Investors, state regulators and the public are not likely to excuse serious cost overruns and time delays as they may have in the past. Utility executives are clear that nuclear is good to the extent that it’s economical. When asked what single concern they find most troubling, they often reply, “That we don’t know what we don’t know.”

What we do know is that there are no methods currently in place for beginning successful development of this next generation of nuclear power plants, and that the industry’s core management skill set may not be sufficient to build a process that differs from a “learn as you go” approach. Thus, it’s critical that the first few plants succeed – not just for their investors but for the entire industry.

THE OPPORTUNITY – KNOWING WHAT YOU DON’T KNOW

The vendors supporting the nuclear power industry represent some of the most prestigious engineering, equipment design and manufacturing firms in the world: Bechtel, Fluor, GE, Westinghouse, Areva and Hitachi. Despite this, the industry is not known for having a strong foundation in managing innovation. In a world of complex physical capital and myriad intangible human assets, political forces and public opinion as well as technology are all required to get a plant to the point of producing power. Thus, more advanced management science could represent the missing piece of the puzzle for the nuclear power industry.

An advanced decision-making framework can help utilities manage unpredictable events, increasing their ability to handle the planning and anticipated disruptions that often beset long, complex projects. By using advanced management science, the nuclear industry can take what it knows and create a learning environment to find out more about what it doesn’t know, improving its odds for success.

Technology with Vision for Today’s Utilities

Around the world, utilities are under pressure. Citizens demand that utilities provide energy and water without undermining environmental quality. Customers seek choice and convenience, and regulators respond with new market structures. Financial stakeholders look for operational efficiency at a time when aging workforces and infrastructures need replacement.

Pressures like these are forcing utilities to re-examine every aspect of the utility business, from supply to consumption. And no utility can handle those changes alone.

Oracle has positioned itself to become utilities’ software partner of choice in the quest to respond positively and completely to these pressures. To do so, Oracle brings together a worldwide team of utility experts, software applications that address mission-critical utility needs, a rock-solid suite of corporate operational software and world-leading middleware and technology.

The result: Flexible, innovative solutions that increase efficiency, improve stakeholder satisfaction and future-proof the organization.

Oracle has reshaped the utilities IT marketplace. During the past year, by acquiring two world leaders in utility-specific applications – SPL WorldGroup and Lodestar – Oracle has created Oracle Utilities, a new brand that establishes a unique portfolio of proven software, integrating industry-specific applications with the capabilities of Oracle Applications, Oracle Fusion Middleware and Oracle Database.

Oracle Utilities offers the world’s most complete suite of end-to-end information technology solutions for the gas, water and electric utilities that communities around the world depend on. Our revolutionary approach to providing utilities with the applications and expertise they need brings together:

  • Oracle Utilities solutions, utility-specific revenue and operations management applications:
    • Customer Care and Billing
    • Mobile Workforce Management
    • Network Management System
    • Work and Asset Management
    • Meter Data Management
    • Load Analysis
    • Load Profiling and Settlement
    • Portfolio Management
    • Quotations Management
    • Business Intelligence

These solutions are available stand-alone, or as an integrated suite.

  • Oracle’s ERP, database and infrastructure software:
    • Oracle E-Business Suite and other ERP applications
    • TimesTen and Sleepycat for real-time data management
    • Data hubs for customer and product master data management
    • Analytics that provide insight and customer intelligence
    • ContentDB, SpatialDB and RecordsDB for content management
    • Secure Enterprise Search for enterprise-wide search needs
  • Siebel CRM for larger competitive utilities’ call centers, specialized contacts and sales:
    • Most comprehensive solution for Sales, Service and Marketing
    • Complete out-of-the box solution that’s easy to tailor to your needs
    • Results such as percentage increases in sales pipeline, user adoption and opportunity-to-win ratios, as well as doubled revenue growth

Stand-alone, each of these products meets utilities’ unique customer and service needs. Together, they enable multi-departmental business processes. The result is an unparalleled set of technologies that address utilities’ most pressing current and emerging issues.

THE VISION

Cross-organizational business processes and best practices are key to addressing today’s complex challenges. Oracle Utilities provides the path via which utilities may:

  • Advance customer care with:
    • Real-time 360-degree views of customer information
    • Tools to help customers save time and money
    • Ability to introduce or retire products and services quickly in response to emerging customer needs
  • Enhance revenue and operations management:
    • Avoid revenue leakage across end-to-end transactions
    • Increase the visibility and auditability of key business processes
    • Manage assets strategically
    • Bill for services and collect revenue cost-effectively
    • Increase field crew and network efficiency
    • Track and improve performance against goals
    • Achieve competitive advantage with a leading-edge infrastructure that helps utilities respond quickly to change
  • Reduce total cost of ownership through access to a single global vendor with:
    • Proven best-in-class utility management solutions
    • Comprehensive, world-class capabilities in applications and technology infrastructure
    • A global 24/7 distribution and support network with 7,000 service personnel
    • Over 14,000 software developers
    • Over 19,000 partners
  • Address the “Green Agenda”:
    • Help reduce pollution
    • Increase efficiency

STRATEGIC TECHNOLOGY FOR THE EMERGING UTILITY

Today’s utility is beset by urgent issues – environmental concerns, rising costs, aging workforces, changing markets, regulatory demands and rising stakeholder expectations.

Oracle Utilities can help meet these challenges by providing the leading mission-critical utilities suite in the marketplace today. Oracle integrates industry-specific customer care and billing, network management, work and asset management, mobile workforce management and meter data management applications with the capabilities of Oracle’s industry-leading enterprise applications, business intelligence tools, middleware and database technologies. We enable customers to adapt more nimbly to market deregulation, help them meet ever-evolving customer demands, enhance operational excellence and deliver on commitments to environmental conservation.

Oracle Utilities’ flexible, standards-based applications and architecture help utilities innovate. They lead toward coherent technology solutions. Oracle helps utilities keep pace with change without losing focus on the energy, water and waste services fundamental to local and global human and economic welfare.

Only Oracle powers the information-driven enterprise by offering a complete, integrated solution for every segment of the utilities industry – from generation and transmission to distribution and retail services. And when you run Oracle applications on Oracle technology, you speed implementation, optimize performance and maximize ROI.

Utilities today need a suite of software applications and technology to serve as a robust springboard from which to meet the challenges of the future.

Oracle offers that suite.

Oracle Utilities solutions enable you to meet tomorrow’s customer needs while addressing the varying concerns of financial stakeholders, employees, communities and governments. We work with you to address emerging issues and changing business conditions. We help you to evolve to take advantage of new technology directions and to incorporate innovation into ongoing activity.

Partnering with Oracle helps you to future-proof your utility.

CONTACT US

For more information, call +1.800.275.4775 to speak to an Oracle representative, or visit oracle.com/industries/utilities.

Copyright © 2008, Oracle. All rights reserved. Published in the U.S.A. This document is provided for information purposes only and the contents hereof are subject to change without notice. This document is not warranted to be error-free, nor subject to any other warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or fitness for a particular purpose. We specifically disclaim any liability with respect to this document and no contractual obligations are formed either directly or indirectly by this document. This document may not be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without our prior written permission.

Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.