Online Transient Stability Controls

Over the last few decades, the growth of the world's population and its corresponding demand for electrical energy have driven a huge increase in the supply of electrical power. However, for logistical, environmental, political and social reasons, this generation is rarely near its consumers, necessitating the growth of very large and complex transmission networks. The addition of variable wind energy in remote locations only exacerbates the situation. Meanwhile, transmission grid capacity has kept pace with neither generation capacity nor consumption, and outdated operational capabilities leave the grid extremely vulnerable to large-scale outages.

For example, today if a fault is detected in the transmission system, the only course is to shed both load and generation, often without consideration of real-time consequences or analysis of alternatives. If not done rapidly, a fault can result in a widespread, cascading power system blackout. Yet while it is necessary to remove factors that might lead to a large-scale blackout, restricting power flow and other countermeasures may achieve this only by sacrificing economical operation. Thus, the flexible and economical operation of an electric power system is often in conflict with the requirement for improved supply reliability and system stability.

Limits of Off-line Approaches

One approach to this problem is a stabilization system that prevents generator step-out by controlling generator acceleration through power shedding: shutting off some of the generators at the moment of a power system fault.

In 1975, an off-line special protection system (SPS) for power flow monitoring was introduced to maintain the transient stability of Japan's trunk power system and power source system after a network expansion. Settings for this system were initially determined in advance by manual calculations, using transient stability simulation programs and assuming many contingencies on typical power flow patterns.

This type of off-line solution has the following problems:

  • Planning, design, programming, implementation and operational tasks are laborious. A vast number of simulations is required to determine the setting tables and required countermeasures, such as generator shedding, whenever new transmission lines are constructed;
  • It is not well suited to variable generation sources such as wind or photovoltaic farms;
  • It is not suitable for reuse and replication, incurring high maintenance costs; and
  • Excessive travel time and related labor expense is required for the engineer and field staff to maintain the units at numerous sites.

By contrast, an online TSC solution employs sensors placed throughout the transmission network, substations and generation sources. These sensors are connected via high-speed communications to regional computer systems that monitor for transient events that may affect system stability. Those systems in turn are connected to centralized computers that oversee the network of distributed computers, building and distributing contingency plans based on historical and recent information. If a transient event occurs, the entire ecosystem responds within 150 ms to detect and analyze the event, determine the correct course of action, and execute the appropriate set of countermeasures to preserve the stability of the power network.

In recent years, high-performance computational servers have been developed, and their costs have fallen enough to deploy many of them in parallel and/or in a distributed computing architecture. The result is a system that not only greatly increases the availability and reliability of the power system, but also optimizes the throughput of the grid. Reliability has improved while network efficiency has increased, yielding more throughput within the transmission grid without significant investment in new transmission lines.

Solution and Elements

In 1995, the world's first online TSC system was developed and introduced in Japan. This solution provided the system stabilization required by the construction of the new 500 kV trunk networks of Chubu Electric Power Co. (CEPCO) [1-4]. Figure 1 shows the configuration of the online TSC system. The system added a pre-processing online calculation in the TSC-P (parent) to the fast, post-event control executed by the combination of TSC-C (child) and TSC-T (terminal). This online TSC system can be considered an example of a self-healing smart grid solution. As a result of periodic simulations using online data in the TSC-P, operators of energy management systems/supervisory control and data acquisition (EMS/SCADA) are constantly aware of stability margins for the current power system situation.

Using the same online data, periodic calculations in the TSC-P reflect current network conditions and determine the proper countermeasures to mitigate transient events. The TSC-P simulates transient stability dynamics for about 100 contingencies on the 500 kV, 275 kV and 154 kV transmission networks. Setting tables of required countermeasures, such as generator shedding, are periodically sent to the TSC-Cs located at main substations. When an actual fault occurs, the TSC-Ts located at generation stations shed the generators. The actual generator shedding by the combination of TSC-Cs and TSC-Ts is completed within 150 ms of the fault, maintaining the system's stability.
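
The division of labor described above lends itself to a simple pattern: the expensive simulation happens ahead of time, so the real-time path is only a table lookup. The Python sketch below illustrates that pattern only; the function names, data shapes and logic are illustrative assumptions, not CEPCO's actual implementation.

```python
# Minimal sketch of the online TSC division of labor (illustrative only).

# TSC-P side: periodic simulation produces a setting table per contingency.
def build_setting_table(contingencies, simulate):
    """Map each contingency ID to the generators to shed if it occurs."""
    table = {}
    for c in contingencies:
        result = simulate(c)          # transient stability simulation
        if not result.stable:
            table[c.id] = result.generators_to_shed
    return table

# TSC-C/TSC-T side: when a real fault hits, no simulation runs -- just a
# lookup, which is what makes a ~150 ms post-fault deadline feasible.
def on_fault(fault_id, setting_table, trip_generator):
    for gen in setting_table.get(fault_id, []):
        trip_generator(gen)           # TSC-T sheds the unit
```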

Customer Experiences and Benefits

Figure 2 shows the locations of online TSC systems and their coverage areas in CEPCO's power network. Two online TSC systems are currently operating: the trunk power TSC system, introduced in 1995 to protect the 500 kV trunk power system, and the power source TSC system, protecting the 154 kV to 275 kV power source systems around the generation stations.

Actual performance data have shown some significant benefits:

  • Total transfer capability (TTC) is improved by eliminating transient stability limitations. TTC is set by the minimum of several limits: the thermal limit of transmission lines, transient stability, frequency stability and voltage stability (see the sketch after this list). Transient stability often determines the TTC for long transmission lines from generation plants. CEPCO was able to introduce high-efficiency, combined-cycle power plants without constructing new transmission lines; TTC was increased from 1,500 MW to 3,500 MW by introducing the online TSC solution.
  • Power shedding is optimized. The analysis considers not only the power flow of the transmission line on which a fault occurs, but also the surrounding power flows, to decide the precise stability limit. The online TSC system can also reflect the constraints and priorities of each generator to be shed. To ensure a smooth restoration after the fault, the restart time of shut-off generators, for instance, can also be included.
  • Constructing new transmission lines normally requires numerous off-line studies, assuming various power flow patterns, to support an off-line SPS. After introduction of the online TSC system, accommodating new transmission lines became a matter of updating the equipment database used for simulation in the TSC-P.
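
As referenced in the first bullet, the min-of-limits rule can be illustrated in a few lines. The MW values below are invented placeholders, not CEPCO data.

```python
# Illustrative only: TTC is the minimum of several independent limits.
limits_mw = {
    "thermal": 4200,
    "transient_stability": 1500,   # often binding on long lines from plants
    "frequency_stability": 3900,
    "voltage_stability": 3600,
}
ttc = min(limits_mw.values())
binding = min(limits_mw, key=limits_mw.get)
print(f"TTC = {ttc} MW, set by the {binding} limit")
# Raising the transient stability limit (e.g., via online TSC) raises TTC
# until the next-lowest limit becomes binding.
```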

In 2003, this CEPCO system received the 44th Annual Edison Award from the Edison Electric Institute (EEI), recognizing CEPCO’s achievement with the world’s first application of this type of system, and the contribution of the system to efficient power management.

Today, benefits continue to accrue. A new TSC-P, which adopts the latest high-performance computation servers, is now under construction for operation in 2009 [3]. The new system will shorten the calculation interval from every five minutes to every 30 seconds in order to reflect power system situations as precisely as possible. This interval was determined by the analysis of various stability situations recorded by the current TSC-P over more than 10 years of operation.

Additionally, although the current TSC-P uses the same online data as the EMS/SCADA, the system can take emergency actions against small-signal instability by receiving phasor measurement unit (PMU) data to detect divergences of phasor angles and voltages among the main substations.
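
One simple way such divergence detection could work is to alarm when the spread of PMU-reported phase angles across substations exceeds a threshold. The sketch below is purely illustrative; the threshold value and data layout are assumptions, not the system's actual algorithm.

```python
# Hedged sketch: flag a wide spread of bus voltage phase angles across
# main substations as a possible small-signal stability problem.
ANGLE_SPREAD_LIMIT_DEG = 40.0   # assumed threshold for illustration

def angle_spread_alarm(pmu_angles_deg):
    """pmu_angles_deg: {substation_name: voltage phase angle in degrees}."""
    angles = list(pmu_angles_deg.values())
    spread = max(angles) - min(angles)
    return spread > ANGLE_SPREAD_LIMIT_DEG, spread

alarm, spread = angle_spread_alarm({"A": 12.0, "B": -31.5, "C": 5.2})
print(f"spread={spread:.1f} deg, alarm={alarm}")   # spread=43.5 deg, alarm=True
```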

Summary

The online TSC system is expected to realize optimum stabilization control of today's complicated power system conditions by obtaining power system information online and carrying out stability calculations at specific intervals. The online TSC will thus help utilities achieve better returns on investment in new or renovated transmission lines, reduce outage time and enable a more efficient smart grid.

References

  1. Ota, Kitayama, Ito, Fukushima, Omata, Morita and Y. Kokai, “Development of Transient Stability Control System (TSC System) Based on Online Stability Calculation,” IEEE Trans. on Power Systems, Vol. 11, No. 3, pp. 1463-1472, August 1996.
  2. Koaizawa, Nakane, Omata and Y. Kokai, “Actual Operating Experience of Online Transient Stability Control System (TSC System),” IEEE PES Winter Meeting, 2000, Vol. 1, pp. 84-89.
  3. Takeuchi, Niwa, Nakane and T. Miura, “Performance Evaluation of the Online Transient Stability Control System (Online TSC System),” IEEE PES General Meeting, June 2006.
  4. Takeuchi, Sato, Nishiiri, Kajihara, Kokai and M. Yatsu, “Development of New Technologies and Functions for the Online TSC System,” IEEE PES General Meeting, June 2006.

Infrastructure and the Economy

With utility infrastructure aging rapidly, reliability of service is threatened. Yet the economy is hurting, unemployment is accelerating, environmental mandates are rising, and the investment portfolios of both seniors and soon-to-retire boomers have fallen dramatically. Everyone agrees change is needed. The question is: how?

In every one of these respects, state regulators have the power to effect change. In fact, the policy-setting authority of the states is not only an essential complement to federal energy policy, it is a critical building block for economic recovery.

There is no question we need infrastructure development. Almost 26 percent of the distribution infrastructure owned and operated by the electric industry is at or past the end of its service life. For transmission, the number is approximately 15 percent, and for generation, about 23 percent. And that’s before considering the rising demand for electricity needed to drive our digital economy.

The new administration plans to spend hundreds of billions of dollars on infrastructure projects. However, most of the money will go towards roads, transportation, water projects and waste water systems, with lesser amounts designated for renewable energy. It appears that only a small portion of the funds will be designated for traditional central station generation, transmission and distribution. And where such funds are available, they appear to be in the form of loan guarantees, especially in the transmission sector.

The U.S. transmission system is in need of between $50 billion and $100 billion of new investment over the next 10 years, and approximately $300 billion by 2030. These investments are required to connect renewable energy sources, make the grid smarter, improve electricity market efficiency, reduce transmission-related energy losses, and replace assets that are too old. In the next three years alone, the investor-owned utility sector will need to spend about $30 billion on transmission lines.

Spending on distribution over the next decade could approximate $200 billion, rising to $600 billion by 2030. About $60 billion to $70 billion of this will be spent in just the next three years.

The need for investment in new generating stations is a bit more difficult to estimate, owing to the uncertainties surrounding the technologies that will prove the most economic under future greenhouse gas regulations and other technology preferences of the Congress and administration. However, it could easily be somewhere between $600 billion and $900 billion by 2030. Of this amount, between $100 billion and $200 billion could be invested over the next three years and as much as $300 billion over the next 10. It will be mostly later in that 10-year period, and beyond, that new nuclear and carbon-compliant coal capacity is expected to come on line in significant amounts. That will raise generating plant investments dramatically.

Jobs, and the Job of Regulators

All of this construction would maintain or create a significant number of jobs. We estimate that somewhere between 150,000 and 300,000 jobs could be created annually by this build-out through 2030, including jobs related to construction, post-construction utility operating positions, and general economic "ripple effect" jobs.

These are sustainable levels of employment – jobs every year, not just one-time surges.

In addition, others have estimated that the development of the smart grid could add between 150,000 and 280,000 jobs. Clearly, then, utility generation, transmission and distribution investments can provide a substantial boost for the economy, while at the same time improving energy efficiency, interconnecting critical renewable energy sources and making the grid smarter.

The beauty is that no federal legislation, no taxpayer money and no complex government grant or loan processes are required. This is virtually all within the control of state regulators.

Timely consideration of utility permit applications and rate requests, as well as project pre-approvals by regulators, allowance of construction work in progress in rate base, and other progressive regulatory practices would vastly accelerate the pace at which these investments could be made and financed, and new jobs created. Delays in permitting and approval not only slow economic recovery, but also create financial uncertainty, potentially threatening ratings, reducing earnings and driving up capital costs.

Helping Utility Shareholders

This brings us to our next point: Regulators can and should help utility shareholders. Although they have a responsibility for controlling utility rates charged to consumers, state regulators also need to provide returns on equity and adopt capital structures that recognize the risks, uncertainties and investor expectations that utilities face in today’s and tomorrow’s very de-leveraged and uncertain financial markets.

It is now widely acknowledged that risk has not been properly priced in the recent past. As with virtually all other industries, equity will play a far more critical role in utility project and corporate finance than in the past. For utilities to attract the equity needed for the buildout just described, equity must earn its full, risk-adjusted return. This requires a fresh look at stockholder expectations and requirements.

A typical utility stockholder is not some abstract, occasionally demonized, capitalist, but rather a composite of state, city, corporate and other pension funds, educational savings accounts, individual retirement accounts and individual shareholders who are in, or close to, retirement. These shares are held largely by, or for the benefit of, everyday workers of all types, both employed and retired: government employees, first responders, trades and health care workers, teachers, professionals, and other blue and white collar workers throughout the country.

These people live across the street from us, around the block, down the road or in the apartments above and below us. They rely on utility investments for stable income and growth to finance their children’s education, future home purchases, retirement and other important quality-of-life activities. They comprise a large segment of the population that has been injured by the economy as much as anyone else.

Fair public policy suggests that regulators be mindful of this and that they allow adequate rates of return needed for financial security. It also requires that regulatory commissions be fair and realistic about the risk premiums inherent in the cost of capital allowed in rate cases.

The cost of providing adequate returns to shareholders is not particularly high. Ironically, the passion of the debate that surrounds cost of capital determinations in a rate case is far greater than the monetary effect that any given return allowance has on an individual customer’s bill.

Typically, the differential return on equity in dispute in a rate case – perhaps between 100 and 300 basis points – represents between 0.5 and 2 percent of a customer's bill for a "wires only" company. (The impact on the bills of a vertically integrated company would be higher.) Acceptance of the utility's requested rate of return would have a relatively small adverse effect on customers' bills, while making a substantial positive impact on the quality of the stockholders' holdings. Fair, if not favorable, regulatory treatment also results in improved debt ratings and lower debt costs, which accrue to the benefit of customers through reduced rates.
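
A back-of-the-envelope check of that claim, using invented figures for a hypothetical wires-only utility. None of these numbers come from the article, and the calculation ignores tax gross-up and other rate-making details, so treat it as an order-of-magnitude sketch.

```python
# Hypothetical wires-only utility: rate base $3.0B, 45% equity layer,
# $2.0B annual delivered revenue (all assumed for illustration).
rate_base = 3.0e9
equity_ratio = 0.45
annual_revenue = 2.0e9

for bps in (100, 300):
    extra_return = rate_base * equity_ratio * (bps / 10_000)
    share_of_bills = extra_return / annual_revenue
    print(f"{bps} bps of ROE ~= ${extra_return / 1e6:.1f}M/yr "
          f"= {share_of_bills:.1%} of revenue")
# Prints roughly 0.7% at 100 bps and 2.0% at 300 bps for these inputs,
# consistent with the 0.5-2 percent range cited in the text.
```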

The List Doesn’t Stop There

Regulators can also be helpful in addressing other challenges of the future. The linchpin of cost-effective energy and climate change policy is energy efficiency (EE) and demand-side management (DSM).

Energy efficiency is truly the low-hanging fruit, capable of providing immediate, relatively inexpensive reductions in emissions and customers' bills. However, reductions in customers' energy use run contrary to utility financial interests unless offset by regulatory policy that removes the disincentives. Depending upon the particulars of a given utility, these policies could include revenue decoupling and the authorization of incentive – or at least fully adequate – returns on EE, DSM and smart grid investments, as well as recovery of related expenses.

Additional considerations could include accelerated depreciation of EE and DSM investments and the approval of rate mechanisms that recover lost profit margins created by reduced sales. These policies would positively address a host of national priorities in one fell swoop: the promotion of energy efficiency, greenhouse gas reduction, infrastructure investment, technology development, increased employment and, through appropriate rate base and rate of return policy, improved stockholder returns.

The Leadership Opportunity

Oftentimes, regulatory decision making is narrowly focused on a few key issues in isolation, usually in the context of a particular utility but sometimes on a statewide generic basis. Rarely is state regulatory policy viewed in a national context. Almost always, issues are litigated individually in highly partisan fashion, with little integration into a larger whole, and utility shareholder interests are usually underrepresented.

The time seems appropriate – and propitious – for regulators to lead the way to a major change in this paradigm while addressing the many urgent issues that face our nation. Regulators can make a difference, probably far beyond that for which they presently give themselves credit.

The Smart Grid in Malta

On the Mediterranean island of Malta, with a population of about 400,000 people on a land mass of just over 300 square kilometers, power, water and the economy are intricately linked. The country depends on electrically powered desalination plants for over half of its water supply. In fact, about 75 percent of the cost of water from these plants on Malta is directly related to energy production. Meanwhile, rising sea levels threaten Malta’s underground freshwater source.

Additionally, in line with the Lisbon Strategy and other European countries, the government of Malta has set an objective of transforming the island into a competitive knowledge economy to encourage investment by foreign companies. Meeting all of these goals in a relatively short period presents a complex, interconnected series of challenges that require immediate attention to ensure the country has a sustainable and prosperous future.

In light of this need, the Maltese national utilities for electricity and water – Enemalta Corp. (EMC) and Water Services Corp. (WSC) – reached a partnership agreement with IBM to undertake a complete transformation of their distribution networks to improve operational efficiency and customer service levels. IBM will replace all 250,000 electricity meters with new devices and connect these, along with the existing water meters, to advanced information technology applications. This will enable remote reading, management and monitoring throughout the entire distribution network.

This solution will be integrated with new back-office applications for finance, billing and cash processes, as well as an advanced analytics tool to transform sensor data into valuable information supporting business decisions and improving customer service levels. It will also include a portal to enable closer interaction with – and more engagement by – the end consumers.

Why are the utility companies in Malta making such a significant investment to reshape their operations? To explore this question, it helps to start with a broader look at smart grid projects to see how they create benefits – not just for the companies making the investment, but for the local community as well.

Smart Grid Benefits

A case is often made that basic operational benefits of a smart grid implementation can be achieved largely through an Advanced Metering Infrastructure (AMI) implementation, which yields real-time readings for use in billing cycles, reduced operational cost in the low voltage network and more control over theft and fraud. In this view, the utility’s operational model is further transformed to improve customer relationship management through the introduction of flexible tariffs, remote customer connection/disconnection, power curtailment options and early outage identification through low voltage grid monitoring.

But AMI extended to a broader smart grid implementation has the potential to achieve even greater strategic benefits. One can see this by simply considering the variety of questions about the impact of the carbon footprint of human activity on the climate and other environmental factors. What is a realistic tradeoff between energy consumption, energy efficiency and economic and political dependencies on the local, national and international levels? Which energy sources will be most effective with such tradeoffs? To what extent can smaller, renewable resources replace today’s large, fossil-based power sources? Where this is possible, how can hundreds or thousands of dispersed, independently operated generators be effectively monitored?

Ultimately, distribution networks need to be smart enough to distinguish among today’s large-scale utility generators; customers producing solar energy for their own needs who are virtually disconnected from the grid; those using a wind power generator and injecting the surplus back into the grid; and end-use customers requiring marginal or full supply. An even more dispersed model for distributed generation will emerge once electric vehicles circulate in towns, placing complex new demands on the grid while offering the benefit of new storage capabilities to the network.
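
A minimal sketch of the distinctions such a network would need to track, expressed as data about each connection point. The categories and fields below are illustrative assumptions, not any standard utility schema.

```python
from dataclasses import dataclass

@dataclass
class ConnectionPoint:
    point_id: str
    kind: str          # e.g. "bulk_generator", "self_consumer_pv",
                       # "prosumer_wind", "load", "ev" (assumed categories)
    can_inject: bool   # may export power to the grid
    can_store: bool    # offers storage (e.g., an EV battery)

fleet = [
    ConnectionPoint("G1", "bulk_generator",   can_inject=True,  can_store=False),
    ConnectionPoint("H7", "self_consumer_pv", can_inject=False, can_store=False),
    ConnectionPoint("W3", "prosumer_wind",    can_inject=True,  can_store=False),
    ConnectionPoint("L9", "load",             can_inject=False, can_store=False),
    ConnectionPoint("E2", "ev",               can_inject=True,  can_store=True),
]

# e.g., which points could help cover an evening peak?
dispatchable = [p.point_id for p in fleet if p.can_inject or p.can_store]
print(dispatchable)   # ['G1', 'W3', 'E2']
```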

Interdependence

Together, water and power distributors, transmission operators, generators, market regulators and final customers will interact in a much more complex, interconnected and interdependent world. This is especially true in a densely populated, modern island ecosystem, where the interplay of electricity, water, gas, communications and other services is magnified.

These points of intersection take numerous shapes. For example, on a national scale, water and sewer services can consume a large portion of the available energy supply. Water service, which is essential to customer quality of life, also presents distribution issues that are similar in many ways to those embedded in the electric grid. At a more local scale, co-generation and micro-CHP generation plants make the interdependency of electricity and gas more visible. Furthermore, utilities’ experience at providing centrally managed services that afford comfort and convenience makes the provision of additional services – communication, security, and more – imaginable. But how to make these interconnections effective contributors to quality of life raises real economic questions. Is it sensible to make an overarching investment in multiple services? How can this drive increased operational efficiency and bring new benefits to customers? Can a clear return on investment be demonstrated to investors and bill payers?

Malta is an example of an island that operates a vertically integrated and isolated electricity system. Malta has no connections with the European electricity grid and no gas pipelines to supply its generators. In the current configuration of the energy infrastructure, all of its demand must be fulfilled by the two existing power plants, which generate power using entirely imported fossil fuel. Because of these limitations on supply, and dependencies on non-native resources, electricity distribution must be extremely efficient, limiting any loss of energy as much as possible. Both technical and commercial losses must be kept fully under control, and theft must be effectively eliminated from the system to avoid unfair social accounting and to ensure proper service levels to all customers.

Estimates of current economic losses in Malta are in the millions of Euros for just the non-technical losses. At these levels, and with limited generation capacity, quality of service and the ability to satisfy demand at all times are threatened. Straining the system even further is the reality that Malta, without significant natural water sources, must rely on a seawater purification process to supply water to its citizens. This desalination process absorbs roughly one-third of the annual power consumption on the island.

But the production process is not the only source of interdependency between electricity and water; the distribution principles of each have strong ties. In most locations in the world, electricity and water distribution have opposing characteristics that allow them to enjoy some symbiotic benefits. Electricity cannot be effectively stored, so generation must match and synchronize with demand in time. Water generally has the opposite characteristic: it can be stored so easily that it frequently serves as pre-generation capacity in hydro generation.

But on an island like Malta, this relationship is turned on its head. There is no natural water to store, and once produced, purified water should be consumed rather quickly. If it is produced in excess, reservoir evaporation and pipeline losses can undermine the desalination effort and the final efficiency of the process. So in Malta, unlike much of the rest of the world, water providers tend to view customer demand much as electricity providers do, and the demand profiles are unable to support each other as they can elsewhere.

These are qualitative observations. But if electricity and water networks can be monitored, and real-time data supplied, providers can begin to assess important questions regarding operational and financial optimization of the system, which will, among other benefits, improve reliability and service quality and keep costs low.
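
As a toy illustration of the kind of optimization this enables, consider scheduling desalination, a large and partly deferrable load, into the lowest-demand hours of the day. All numbers are invented, and real dispatch would also respect the storage and evaporation constraints noted above.

```python
# Invented 24-hour grid demand profile (MW) and desalination requirement.
hourly_grid_demand_mw = [380, 360, 350, 355, 370, 420, 500, 560,
                         590, 600, 610, 620, 615, 605, 600, 595,
                         600, 620, 640, 630, 580, 520, 460, 410]
desal_hours_needed = 8   # hours of desalination run time required today

# Pick the lowest-demand hours; real scheduling would also cap stored
# water to limit reservoir evaporation and pipeline losses.
ranked = sorted(range(24), key=lambda h: hourly_grid_demand_mw[h])
run_hours = sorted(ranked[:desal_hours_needed])
print("run desalination in hours:", run_hours)   # overnight hours win
```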

Societal Implications

An additional issue the government of Malta faces is ensuring that the population has a sufficiently diverse educational and technical experience base. When a company is attracted to invest in Malta, it benefits from finding local residents with appropriate skills to employ; costs increase if too many foreign nationals must be brought in to operate the company. Therefore, pervasive education on information and communication technology-related topics is a government priority, aimed at young students as well as adult citizens.

Therein lies a further – but no less important – benefit of bringing a smart grid to Malta. Energy efficiency campaigns supported by smart meters will not only help citizens control consumption behavior and make more efficient and effective electricity and water operations a reality, but will also help raise the island's technology culture to a new level. Meter installers will work with palmtop and other advanced IT applications, learning to connect the devices not only to the physical electrical infrastructure but also to the embedded information infrastructure. From smart home components to value-added services, commercial and industrial players will pursue new opportunities that leverage the smart grid infrastructure in Malta, adding highly skilled jobs and new businesses to the Maltese economy.

Benefits will expand down to the elementary education levels as well. For example, it will be possible for schools to visit utility demonstration centers where the domestic meter can be presented as an educational tool. This potential includes making energy efficiency a door to educational programs on responsible citizenship, science, mathematics, environmental sustainability and many other key learning areas. Families will find new incentive to become familiar with the Internet as they connect to the utility’s website to control their energy bill and investigate enhanced tariffs for more cost-effective use of basic services.

Conclusion

Malta is famed for its Megalithic Temples – the oldest free-standing buildings in Europe, older than the Pyramids of Egypt [1]. But with its smart grid project, it stands to be the home of one of the newest and most advanced infrastructure projects as well. The result of the Maltese smart grid effort will be an end-to-end electricity and water transmission and distribution system. It will not only enable more efficient consumption of energy and water, but will completely transform the relationship of Maltese consumers with their utilities, while enhancing their education and employment prospects. These benefits go well beyond the traditional calculation for, say, a simple AMI-focused project, and demonstrate that a smart grid project in an island environment can do much more than improve utility operations. It can transform the entire community in ways that will improve the quality of life in Malta for generations to come.

Reference:

  1. The Bradshaw Foundation, 2009.

An Australian Approach to Energy Innovation and Collaboration

Just as global demand for energy is steadily increasing, so too are the recognized costs of power generation. A recent report by Australia's Treasury about the possibility of creating a low-emissions future noted that electricity production currently accounts for 34 percent of the nation's net greenhouse gas emissions, and that it was the fastest-growing contributor to greenhouse gas emissions over the period from 1990 to 2006 [1].

This growing realization of the true cost of energy production will be brought into stark relief with the likely implementation of a national emissions trading scheme in 2010.

Australia's energy producers are entering an era of great change, with increasing pressure to drive efficiencies on both the supply and demand sides of their businesses. These pressures manifest themselves in the operation of energy and utilities organizations in three basic needs:

  • To tighten the focus on delivering value, within the paradigm of achieving more with less, and while concentrating on their core business;
  • To exploit the opportunities of an industry in transformation, and to build new capabilities; and
  • To act with speed in terms of driving leadership, setting the agenda, managing change and leveraging experience – all while managing risk.

The net effect of the various government initiatives and mandates around energy production is to drive energy and utility companies to deliver power more responsibly and efficiently. The most obvious evidence of this reaction is the development of advanced metering infrastructure (AMI) and intelligent network (IN) programs across Australia. Yet a more fundamental change is also starting to emerge – a change that is leading companies to work more openly and collaboratively toward a smarter energy value chain.

This renewed sense of purpose gives energy and utilities organizations an opportunity to think and act in dynamic new ways as they re-engineer their operations to:

  • Transform the grid from a rigid, analog system to a responsive and automated energy delivery system by driving operational excellence;
  • Empower consumers and improve their satisfaction by providing them with near real-time, detailed information about their energy usage; and
  • Reduce greenhouse gas emissions to meet or exceed environmental regulatory requirements while maintaining a sufficient, cost-effective power supply.

A Global Issue

In Australia, Country Energy, a leading essential services corporation owned by the New South Wales Government, is leading the move to change not just its own organization, but the entire electricity supply industry.

With a strength of around 4,000 employees and Australia's largest power supply network, covering 95 percent of New South Wales' landmass, Country Energy recognized that the scale and scope of this industry challenge meant no single player could find all the answers on its own.

A Powerful Alliance

Formed by IBM, the Global Intelligent Utilities Network (IUN) Coalition represents a focused and collaborative effort to address the many economic, social and environmental pressures facing these organizations as they shape, accelerate and share in the development of the smart grid. Counting just one representative organization from each major urban electricity market, the coalition will collaborate to enable the rapid development of solutions, adoption of open industry-based standards, and creation of informed policy and regulation.

Not only does the coalition believe these three streams of collaboration will help drive the adoption of the IUN, or smart grid, in markets across the planet, but the sharing of best-practice information and creation of a unified direction for the industry will help reduce regulatory, financial, market and implementation risks. And, like all productive collaborative relationships, the rewards for individual members are likely to become amplified as the group grows, learns and shares.

Global Coalition, Local Results

As Australia's only member of the coalition, Country Energy has been quick to capitalize on – and contribute to – the benefits of the global knowledge base, adapting the learnings of overseas operators in both developed and emerging markets, and applying them to the unique challenges of a huge landmass with a decentralized population.

From its base in a nation rich in natural resources, the Australian energy and utilities industry is quickly moving to adapt to the emergence of a carbon economy.

One of Country Energy's key projects in this realm is the development of its own Intelligent Network (IN), providing the platform for developing its future network strategy, incorporating distributed generation and storage, as well as enabling consumer interaction through the provision of real-time information on energy consumption, cost and greenhouse footprint.

Community Collaboration

Keen to understand how the IN will work for customers and its own employees, Country Energy is moving the smart grid off the page and into real life.

Country Energy has identified two communities in which to demonstrate, measure and evaluate the technical and commercial viability of IN initiatives, with the primary goal of learning from both the suitability of the solutions implemented and the operational partnership models by which they will be delivered.

These two IN communities are intended to provide a live research environment to evaluate current understandings and technologies, and will include functionality across nine areas, including smart meters, electrical network monitoring and control, and consumer interaction and response.

Demonstrating the Future

In preparing to put the digital age to work, and to practically demonstrate to stakeholders what an IN will deliver, Country Energy has developed Australia's first comprehensive IN Research and Demonstration Centre near Canberra.

This interactive centre shows what the power network of the not-too-distant future will look like and how it will change the way power is delivered, managed and used.

The centre includes a residential setting to demonstrate the "smart home of the future," while giving visitors a preview of an energy network that automatically detects where a power interruption occurs, providing up-to-date information to network operators and field crews.

An initiative as far-reaching as the IN will rely on human understanding as much as it does on technology and infrastructure.

Regional Delivery Model

In addition to the coalition, IBM and Country Energy developed and implemented an innovative new business model to transform Country Energy's application development and support capability. In 2008, Country Energy signed a four-year agreement with IBM to establish a regional development centre, located in the city of Bathurst.

The centre is designed to help maximize cost efficiencies, accelerate the pace of skills transfer through close links with the local higher-education facility, Charles Sturt University, and support Country Energy's application needs as it moves forward on its IN journey. The centre is also providing services to other IBM clients.

Through the centre, Country Energy aims to improve the service levels and innovations delivered to its business via skills transfer to Country Energy. The outcome also allows Country Energy to meet its commitment to support regional areas and offers a viable alternative to global delivery models.

Looking to the Future

In many ways, the energy and utilities industry has come to symbolize the crossroads at which many of the planet's systems find themselves at this moment in time: legacy systems are operating in an economic and environmental ecosystem that is simply unable to sustain current levels – let alone the projected demands of global growth.

Yet help is at hand: infusing these systems with the instrumentation to extract real-time data from every point in the value chain, interconnecting these points to allow the constant, back-and-forward flow of information, and finally, employing the power of analytics to give these systems the gift of intelligence.

In real terms, IBM and Country Energy are harnessing the depth of knowledge and expertise of the Global IUN Coalition, collaborating to help change the way the industry operates at a fundamental level in order to create an IN. This new smart grid will operate as an automated energy delivery system, empowering consumers and improving their satisfaction by providing them with near real-time, detailed information about their energy usage.

And for the planet that these consumers – and billions of others – rely upon, Country Energy's efforts will help reduce greenhouse gas emissions while maintaining that most basic building block of human development: safe, dependable, available and cost-effective power.

Reference

  1. Commonwealth of Australia, Commonwealth Treasury, “Australia’s Low Pollution Future: The Economics of Climate Change Mitigation,” 30 October 2008.

Author's Note: This customer story is based on information provided by Country Energy and illustrates how one organization uses IBM products. Many factors have contributed to the results and benefits described. IBM does not guarantee comparable results elsewhere.

Power and Patience

The U.S. utility industry – particularly its electric-producing branch; there also are natural gas and water utilities – has found itself in a new, and very uncomfortable, position. Throughout the first quarter of 2009 it was front and center in the political arena.

Politics has been involved in the U.S. electric generation and distribution industry since its founding in the late 19th Century by Thomas Edison. Utilities have been regulated entities almost since the beginning and especially after the 1930s when the federal government began to take a much greater role in the direction and regulation of private enterprise and national economics.

What is new as we are about to enter the second decade of the 21st Century is that not only is the industry being in large part blamed for a newly discovered pollutant, carbon dioxide, which is naturally ubiquitous in the Earth’s atmosphere, but it also is being tasked with pulling the nation out of its worst economic recession since the Great Depression of the 1930s. Oh, and in your spare time, electric utilities, enable the remaking of the automobile industry, eliminate the fossil fuels which you have used to generate ubiquitous electricity for 100 years, and accomplish all this while remaining fiscally sound and providing service to all Americans. Finally, please don’t make electricity unaffordable for the majority of Americans.

It's doubtful that very many people have ever accused politicians of being logical, but in 2009 they seem to have decided to simultaneously defy the laws of physics, gravity, time, history and economics. They want the industry to completely remake itself, going from the centralized large-plant generation model created by Edison to widely dispersed smaller generation; from fossil fuel generation to clean "renewable" generation; from a mostly manually controlled and maintained system to a self-healing, ubiquitously digitized and computer-controlled enterprise; from a marginally profitable (5-7 percent), mostly privately owned system to a massive tax collection system for the federal government.

Is all this possible? The answer likely is yes, but in the timeframe being posited, no.

Despite political co-option of the terms "intelligent utility" and "smart grid" in recent times, the electric utility industry has been working in these directions for many years. Distribution automation (DA) – being able to control the grid remotely – is nothing new. Utilities have been working on DA and SCADA (supervisory control and data acquisition) systems for more than 20 years. They also have been building out communications systems: first analog radio for dispatching service crews to far-flung territories, and in recent times, digital systems to reach all of the millions of pieces of equipment they service. The terms were invented not by politicians, but by the utilities themselves.

Prior to 2009, all of these concepts were under way at utilities. WE Energies has a working "pod" of all-digital, self-healing, radial-designed feeders. The concept is being tried in Oklahoma, Canada and elsewhere. But the pods are small and still experimental. Pacific Gas and Electric, PEPCO and a few others have demonstration projects of "artificial intelligence" on the grid to automatically switch power around outages. TVA and several others have new substation-level servers that allow communications with, data collection from and monitoring of IEDs (intelligent electrical devices) while simultaneously providing a "view" into the grid from anywhere else in the utility, including the boardroom. But all of these are relatively small-scale installations at this point. To distribute them across the national grid is going to take time and a tremendous amount of money. The transformation to a smart grid is under way and accelerating. However, to this point, the penetration is relatively small. Most of the grid still is big and dumb.

Advanced metering infrastructure (AMI) actually was invented by utilities, although vendors serving the industry have greatly advanced the art since the mid-1990s. Utilities installed an earlier generation of AMI, called automated meter reading (AMR), for about 50 percent of all customers; the other 50 percent still were being read by meter readers traipsing through people's yards.

AMI, which allows two-way communications with the meters (AMR is mostly one-way), is advancing rapidly, but still has reached less than 20 percent of American homes, according to research by AMI guru Howard Scott and Sierra Energy Group, the research and analysis division of Energy Central. Large-scale installations by Southern Company, Pacific Gas and Electric, Edison International and San Diego Gas and Electric are pushing that percentage up rapidly in 2009, and other utilities are in various stages of pilots. The first installation of a true two-way metering system was at Kansas City Power & Light Co. (now Great Plains Energy) in the mid-1990s.
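
The functional difference is easy to see in a sketch: an AMR meter can only be listened to, while an AMI meter also accepts commands. These classes are illustrative assumptions, not any vendor's API.

```python
class AMRMeter:
    """One-way: the meter periodically broadcasts a register read."""
    def __init__(self, kwh=0.0):
        self.kwh = kwh

    def broadcast_read(self):
        return {"kwh": self.kwh}      # the utility can only listen

class AMIMeter(AMRMeter):
    """Two-way: the utility can also send commands back to the meter."""
    def __init__(self, kwh=0.0):
        super().__init__(kwh)
        self.connected = True
        self.price_per_kwh = 0.10     # assumed default tariff

    def handle_command(self, cmd, **kwargs):
        if cmd == "disconnect":       # remote connect/disconnect
            self.connected = False
        elif cmd == "set_tariff":     # flexible time-of-use pricing
            self.price_per_kwh = kwargs["price"]
        elif cmd == "read_now":       # on-demand read for billing
            return self.broadcast_read()
```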

So the intelligent utility and smart grid were under development by utilities before politicians got into the act. However, the build-out was expected to take perhaps 30 years or more to complete, down to the smallest municipal and co-operative utilities. Many of the smaller utilities haven't even started pilots. Xcel Energy, Minneapolis, is building a smart grid model in one city, Boulder, Colo., but by May 2009, two of the primary architects of the effort, Ray Gogel and Mike Carlson, had left Xcel. Austin Energy has parts of a smart grid installed, but it still reaches only a portion of Austin's population, and "home automation" reaches an even smaller proportion.

There are numerous "paper" models in existence for these concepts. One, developed by Sierra Energy Group more than three years ago, is shown in Figure 1.

Other major portions of what is being envisioned by politicians have yet to be invented or developed. There is no reasonably priced, reasonably practical electric car, nor are there standardized connection systems to recharge them. There are no large-scale transmission systems to reach remote windmill farms or solar-generating facilities, and there is large-scale resistance from environmentalists to building such transmission facilities. Despite some political pronouncements, renewable generation other than hydroelectric dams still produces less than 3 percent of America's electricity, and that percentage is climbing very slowly.

Yes, the federal government was throwing some money at the build-out in early 2009: about $4 billion for smart grid and some $30 billion to $45 billion for renewable energy. But these are drops in the bucket compared to the amount of money – estimated by responsible economists at $3 trillion or more – required just to build and replace the aging transmission systems and automate the grid. This is money utilities don't have and can't get without making the cost of electricity prohibitive for a large percentage of the population. Despite one political pronouncement, windmills in the Atlantic Ocean are not going to replace coal-fired generation in any conceivable time frame, certainly not within the four years of the current administration.

Then, you have global warming. As a political movement, global warming serves as a useful stick to the carrot of federal funding for renewable energy. However, the costs for the average American of any type of tax on carbon dioxide are likely to be very heavy.

In the midst of all this, utilities still have to go to public service commissions in all 50 states for permission to raise rates. If they can’t raise rates – something resisted by most PSCs – they can’t generate the cash to pay for this massive build-out. PSC commissioners also are politicians, by the way, with an average tenure of only about four years, which is hardly long enough to learn how the industry works, much less how to radically reconfigure it in a similar time-frame.

Despite a shortage of engineers and other highly skilled workers in the United States, the smart grid and intelligent utilities will be built in the U.S. But it is a generational transformation, not something that can be done overnight. To expect the utility industry to gear up to get all this done in time to “pull us out” of the most serious recession of modern times just isn’t realistic – it’s political. Add to the scale of the problem political wrangling over every concept and every dollar, mix in a lot of government bureaucracy that takes months to decide how to distribute deficit dollars, and throw in carbon mitigation for global warming and it’s a recipe for disaster. Expect the lights to start flickering along about…now. Whether they only flicker or go out for longer periods is out of the hands of utilities – it’s become a political issue.

Managing the Plant Data Lifecycle

Intelligent Plant Lifecycle Management (iPLM) is the process of managing a generation facility's data and information throughout its lifetime – from initial design through to decommissioning. This paper will look at results from the application of this process in other industries such as shipbuilding, and show how those results are directly applicable to the design, construction, operation and maintenance of complex power generation facilities, specifically nuclear and clean coal plants.

In essence, iPLM can unlock substantial business value by shortening plant development times, and by efficiently finding, reusing and changing plant data. It also enables an integrated and transparent collaborative environment to manage business processes.

Recent and substantial global focus on greenhouse gas emissions, coupled with rising and volatile fossil fuel prices, rapid economic growth in nuclear-friendly Asian countries, and energy security concerns, is driving a worldwide resurgence in commercial nuclear power interest.

The power generation industry is undergoing a global transformation that is putting pressure on traditional methods of operation and opening the door to substantial innovation. Due to factors such as the transition to a carbon-constrained world, which greatly affects a generation company's portfolio mix decisions, escalating constraints in the global supply chain for raw materials and key plant components, fuel price volatility and security-of-supply concerns, generation companies must make substantial investments in an environment of increasing uncertainty.

In particular, there is renewed interest globally in the development of new nuclear power plants. Plants continue to be built in parts of Asia and Central Europe, while a resurgence of interest is seen in North America and Europe. Combined with the developing interest in building clean coal facilities, the power generation industry is facing a large number of very complex development projects.

A key constraint being felt worldwide, however, is a severe and increasing shortage of qualified technical personnel to design, build and operate new generation facilities. Additionally, as most of the world's existing nuclear fleet reaches the end of its originally designed life span, relicensing of these nuclear plants to operate another 10, 20 or even 30 years is taking place globally.

Sowing Plant Information

iPLM can be thought of as lifecycle management of information and data about the plant assets (see Figure 1). It also includes the use of this information over the physical plant's complete lifecycle to minimize project and operational risk, and optimize plant performance.

This information includes design specifications, construction plans, component and system operating instructions, real-time and archived operating data, as well as other information sources and repositories. Traditionally, it has been difficult to manage all of this structured and unstructured data in a consistent manner across the plant lifecycle to create a single version of the truth.

In addition, a traditional barrier has existed between the engineering and construction phases, and the operations and maintenance phases (see Figure 2). So even if the technical issues of interconnectivity and data/information management are resolved via an iPLM solution, it is still imperative to change the business processes associated with these domains to take full advantage.
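
A minimal sketch of the "single version of the truth" idea: every record about an asset is filed against one asset tag and one lifecycle phase, so later phases read the same store that earlier phases wrote to. The structure and phase names below are illustrative assumptions, not a product schema.

```python
from collections import defaultdict

PHASES = ("design", "construction", "operations",
          "maintenance", "decommissioning")   # assumed phase names

class PlantRecordStore:
    def __init__(self):
        self._records = defaultdict(list)   # asset_tag -> [(phase, payload)]

    def add(self, asset_tag, phase, payload):
        assert phase in PHASES
        self._records[asset_tag].append((phase, payload))

    def history(self, asset_tag):
        """Full lifecycle trail for one asset, design basis through O&M."""
        return list(self._records[asset_tag])

store = PlantRecordStore()
store.add("PUMP-001A", "design", {"doc": "spec-rev-C", "rated_flow_m3h": 450})
store.add("PUMP-001A", "operations", {"vibration_mm_s": 2.1})
print(store.history("PUMP-001A"))
```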

Benefits

iPLM combines the benefits of a fully integrated PLM environment with the connection of an information repository and flow of operational functions. These functions include enterprise asset management (EAM) systems. Specific iPLM benefits are:

  • Ability to accurately assess initial requirements before committing to capital equipment orders;
  • Efficient balance of owner requirements with best practices and regulatory compliance;
  • Performing design work and simulation as early as possible to ensure the plant can be built within schedule and budget;
  • Better project execution with real-time information that is updated automatically through links to business processes, tasks, documents, deliverables and other data sources;
  • Design and engineering of multi-disciplinary components – from structure to electrical and fluid systems – to ensure the plant is built right the first time;
  • Ability to virtually plan how plants and structures will be constructed to minimize costly rework;
  • Optimization of operations and maintenance processes to reduce downtime and deliver long-term profits to the owners;
  • Ensuring compliance with regulatory and safety standards;
  • Maximizing design and knowledge reuse from one successful project to another;
  • Managing complexity, including sophisticated plant systems and the interdependent work of engineering consultants, suppliers and the construction sites;
  • Visibility of evolving design and changing requirements to all stakeholders during new or retrofitting projects; and
  • Providing owners and operators a primary repository for all plant information and the processes that govern it throughout the lifecycle.

Benefits accrue at different times in the plant lifecycle, and to different stakeholders. They also depend heavily on the consistent and dedicated implementation of basic iPLM solution tenets.

Value Proposition

PLM solutions enable clients to optimize the creation and management of complex information assets over a project's complete lifecycle. Shipbuilding PLM, in particular, offers an example similar to the commercial nuclear energy generation ecosystem. Defense applications, such as nuclear destroyer and aircraft carrier platform developments, are particularly good examples.

A key aspect of the iPLM value proposition is the seamless integration of data and information throughout the design, build, operate and maintain processes for industrial plants. The iPLM concept is well accepted by the commercial nuclear ecosystem. There is an understanding by engineering companies, utilities and regulators that information/data transparency, information lifecycle management and better communication throughout the ecosystem are necessary to build timely, cost-effective, safe and publicly accepted nuclear power plants.

iPLM leverages capabilities in PLM, EAM and Electronic Content Management (ECM), combined with data management/integration, information lifecycle management, business process transformation and integration with other nuclear functional applications through a Service Oriented Architecture (SOA)-based platform. iPLM can also provide a foundation on which to drive high-performance computing into commercial nuclear operations, since simulation requires consistent, valid, accessible data sets to be effective.

A hallmark of the iPLM vision is that it is an integrated solution in which information related to the nuclear power plant flows seamlessly across a complete and lengthy lifecycle. There are a number of related systems with which an iPLM solution must integrate. Therefore, adherence to industry-standard interoperability and data models is necessary for a robust iPLM solution. An example of an appropriate data model standard is ISO 15926, which was recently developed to facilitate data interoperability.
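
The value of a neutral data model such as ISO 15926 can be sketched simply: each tool keeps its own field names, and all exchange passes through one shared vocabulary, avoiding pairwise converters. The field names and mappings below are invented for illustration and are not actual ISO 15926 classes.

```python
# Assumed neutral vocabulary shared by all tools in the ecosystem.
NEUTRAL_FIELDS = {"tag", "equipment_class", "design_pressure_kpa"}

design_tool_to_neutral = {
    "TagNo": "tag",
    "EquipType": "equipment_class",
    "DesPressKPa": "design_pressure_kpa",
}
eam_tool_to_neutral = {
    "asset_id": "tag",
    "category": "equipment_class",
    "max_pressure_kpa": "design_pressure_kpa",
}

def to_neutral(record, mapping):
    out = {mapping[k]: v for k, v in record.items() if k in mapping}
    assert set(out) <= NEUTRAL_FIELDS
    return out

def from_neutral(record, mapping):
    inverse = {v: k for k, v in mapping.items()}
    return {inverse[k]: v for k, v in record.items()}

# Design tool -> neutral model -> EAM system, with no pairwise converter.
neutral = to_neutral({"TagNo": "P-101", "DesPressKPa": 1600},
                     design_tool_to_neutral)
print(from_neutral(neutral, eam_tool_to_neutral))
# {'asset_id': 'P-101', 'max_pressure_kpa': 1600}
```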

Combining EAM and PLM

Incorporating EAM with PLM is one of the key integrations created by an iPLM solution, and it provides several benefits. First, it forms the basis for a cradle-to-grave data and work process repository for all information applicable to a new nuclear power plant. A single version of the truth becomes available early in the project design, and remains applicable in the construction, start-up and test, and turnover phases of the project.

Second, with the advent of single-step
licensing in many parts of the world (consider
the COLA, or combined Construction
and Operating License Application
in the U.S.), licensing risk is considerably
reduced by consistent maintenance of plant information. Demonstrating that the
plant being started up is the same plant
that was designed and licensed becomes
more straightforward and transparent.

Third, using an EAM system during construction,
and incrementally incorporating
the deep functionality necessary for EAM
in the plant operations, can facilitate and
shorten the plant transfer period from the
designers and constructors to the owners
and operators.

Finally, the time and cost to build a new
plant is significant, and delay in connecting
the plant to the grid for the safe generation
of megawatts can easily cost millions
of dollars. The formidable challenges
of nuclear construction, however, may be
offset by an SOA-based integrated information
system, replacing the traditional
unique and custom designed applications.

To help address these challenges, the
power generation industry ecosystem –
including utilities, engineering companies,
reactor and plant designers, and regulators
– can benefit by looking at methodologies
and results from other industries that
have continued to design, build, operate
and maintain highly complex systems
throughout the last 10 to 20 years.

Here we examine what the shipbuilding
industry has done, results it achieved, and
where it is going.

Experiences In Shipbuilding

The shipbuilding industry has many
similarities to the development of a new
nuclear or clean coal plant. Both are very
complex, long lifecycle assets (35 to 70
years) which require precise and accurate
design, construction, operation and
maintenance to both fulfill their missions
and operate safely over their lifetimes. In
addition, the respective timeframe and
costs of designing and building these
assets (five to 10 years and $5 billion to
$10 billion) create daunting challenges
from a project management and control
point of view.

An example of a successful implementation
of an iPLM-like solution in the shipbuilding
industry is a project completed
for Northrop Grumman’s development of
the next generation of U.S. surface combat
ships, a four-year, $2.9 billion effort.
This was a highly complex, collaborative
project completed by IBM and Dassault
Systemes to design and construct a new
fleet of ships with a keen focus on supporting
efficient production, operation
and maintenance of the platform over its
expected lifecycle.

A key consideration in designing, constructing
and operating modern ships
is increasing complexity of the assets,
including advanced electronics, sensors
and communications. These additional
systems and requirements greatly multiply
the number of simultaneous constraints
that must be managed within the
design, considered during construction
and maintained and managed during
operations. This not only includes more
system complexity, but also adds to the
importance of effective collaboration, as
many different companies and stakeholders
must be involved in the ship’s overall
design and construction.

An iPLM system helps to enforce standardization,
enabling lean manufacturing
processes and enhancing producibility of
various plant modules. For information
technology architecture to continue to be
relevant over the ship’s lifecycle, it is paramount
that it be based on open standards
and adhere to the most modern software
and hardware architectural philosophies.

To provide substantive value, both for
cost and schedule, tools such as real-time
interference checking, advanced visualization,
early validation and constructability
analysis are key aspects of an iPLM solution
in the ship’s early design cycle. For
instance, early visualization allows feedback
from construction, operations and
maintenance back into the design process
before it’s too late to inexpensively make
changes.

There are also iPLM solution benefits
for the development of future projects.
Knowledge reuse is essential for decreasing
costs and schedules for future units,
and for continuous improvement of
already built units. iPLM provides for
more predictable design and construction
schedules and costs, reducing risk for the
development of new plants.

It is also necessary to consider cultural
change within the ecosystem to reap the
full iPLM solution benefits. iPLM represents
a fundamentally different way of
collaborating and closing the loop between
the various parts of the ship development
and operation lifecycle. As such, people
and processes must change to take advantage
of the tools and capabilities. Without
these changes, many of the benefits of an
iPLM solution could be lost.

Here are some sample cost and schedule
benefits from Navy shipbuilding
implementations of iPLM:

  • Documentation errors reduced by 15 percent;
  • Performance to schedule increased by 25 percent;
  • Labor costs for engineering analysis reduced by 50 percent;
  • Change process cost and time reduced by 15 percent; and
  • Error correction costs during production reduced by 15 percent.

Conclusions

An iPLM approach to design, construction,
operation and maintenance of a
commercial nuclear power plant – while
requiring reactor designers, engineering
companies, owner/operators, and regulators
to fundamentally change the way
they approach these projects – has been
shown in other industries to have substantial
benefits related to cost, schedule and
long-term operation and maintainability.

By developing and delivering two plants
to the customer – the physical plant and
the “digital plant” – the industry will gain
substantial advantages during both plant
construction and operation. Financial markets, shareholders,
regulators and the general public
will have more confidence in the development
and operation of these plants
through the predictability, performance to
schedule and cost and transparency that
an iPLM solution can help provide.

Future of Learning

The nuclear power industry is facing significant employee turnover, which may be exacerbated by the need to staff new nuclear units. To maintain a highly skilled workforce to safely operate U.S. nuclear plants, the industry must find ways to expedite training and qualification, enhance knowledge transfer to the next generation of workers, and develop leadership talent to achieve excellent organizational effectiveness.

Faced with these challenges, the Institute of Nuclear Power Operations (INPO), the organization charged with promoting safety and reliability across the 65 nuclear electric generation plants operating in the U.S., created a “Future of Learning” initiative. It identified ways the industry can maintain the same high standard of excellence and record of nuclear safety, while accelerating training development, individual competencies and plant training operations.

The nuclear power industry is facing the perfect storm. Like much of the industrialized world, it must address issues associated with an aging workforce since many of its skilled workers and nuclear engineering professionals are hitting retirement age, moving out of the industry and beginning other pursuits.

Second, as baby boomers transition out of the workforce, they will be replaced by an influx of Generation Y workers. Many workers in this “millennials” generation are not aware of the heritage driving the single-minded focus on safety. They are asking for new learning models, utilizing the technologies which are so much a part of their lives.

Third, even as this big crew change takes place, there is increasing demand for electricity. Many are turning to cleaner technologies – solar, wind, and nuclear – to close the gap. And there is a resurgence in requests for building new nuclear plants, or adding new reactors at existing plants. This nuclear renaissance also requires training and preparation to take on the task of safely and reliably operating our nuclear power plants.

It is estimated there will be an influx of 25,000 new workers in the industry over the next five years, with an additional 7,000 new workers needed if just a third of the new plants are built. Given that incoming workers are more comfortable using technology for learning, and that delivery models that include a blend of classroom-based, instructor-led, and Web-based methods can be more effective and efficient, the industry is exploring new models and a new mix of training.

INPO was created by the nuclear industry in 1979 following the Three Mile Island accident. It has 350 full-time and loaned employees. As a nonprofit organization, it is chartered to promote the highest levels of safety and reliability – in essence, to promote excellence – in the operation of nuclear electric generating plants. All U.S. nuclear operating companies are members.

INPO’s responsibilities include evaluating member nuclear site operations, accrediting each site’s nuclear training programs and providing assistance and information exchange. It has established the National Academy for Nuclear Training, and an independent National Nuclear Accrediting Board. INPO sends teams to sites to evaluate their respective training activities, and each station is reviewed at least every four years by the accrediting board.

INPO has developed guidelines for 12 specifically accredited programs (six operations and six maintenance/technical), including accreditation objectives and criteria. It also offers courses and seminars on leadership, where more than 1,500 individuals participate annually, from supervisors to board members. Lastly, it operates NANTeL (National Academy for Nuclear Training e-Learning system) with 200 courses for general employee training for nuclear access. More than 80,000 nuclear workers and sub-contractors have completed training over the Web.

The Future of Learning

In 2008, to systematically address workforce and training challenges, the INPO Future of Learning team partnered with IBM Workforce and Learning Solutions to conduct more than 65 one-on-one interviews with chief executive officers, chief nuclear officers, senior vice presidents, plant managers, plant training managers and other leaders in the extended industry community. The team also completed 46 interviews with plant staff during a series of visits to three nuclear power plants. Lastly, the team developed and distributed a survey that was sent to training managers at the 65 nuclear plants, achieving a 62 percent response rate.

These are statements the team heard:

  • “Need to standardize a lot of the training, deliver it remotely, preferably to a desktop, minimize the ‘You train in our classroom in our timeframe’ and have it delivered more autonomously so it’s likely more compatible with their lifestyles.”
  • “We’re extremely inefficient today in how we design/develop and administer training. We don’t want to carry inefficiencies that we have today into the future.”
  • “Right now, in all training programs, it’s a one-size-fits-all model that’s not customized to an individual’s background. Distance learning would enable this by allowing people to demonstrate knowledge and let some people move at a faster pace.”
  • “We need to have ‘real’ e-learning. We’ve been exposed to less than adequate, older models of e-learning. We need to move away from ‘page turners’ and onto quality content.”

Several recommendations were generated as a result of the study. The first focused on ways to improve INPO’s current training offerings by adding leadership development courses, ratcheting up the interactivity of the Web-based and e-learning offerings in NANTeL and developing a “nuclear citizenship” course for new workers in the industry.

Second, there were recommendations about better utilizing training resources across the industry by centralizing common training, beginning with instructor training and certification and generic fundamentals courses. It was estimated that 50 percent of the accredited training materials are common across the industry. To accomplish this objective, INPO is exploring an industry infrastructure that would enable centralized training material development, maintenance and delivery.

The last set of recommendations focused on methods for better coordination and efficiency of training, including developing processes for certifying vendor training programs, and providing a jump-start to common community college and university curriculum.

In 2009, INPO is piloting a series of Future of Learning initiatives which will help determine the feasibility, cost-effectiveness, readiness and acceptance of this first set of recommendations. It is starting to look more broadly at ways it can utilize learning technology to drive economies of scale, accelerative and prescriptive learning, and deliver value to the nuclear electric generation industry.

Where Do We Go From Here?

Beyond the initial perfect storm is another set of factors driving the future of learning.

First, consider the need for speed. It has been said that “If you are not learning at the speed of change, you are falling behind.”

In “25 Lessons from Jack Welch,” the former CEO of General Electric said, “The desire, and the ability, of an organization to continuously learn from any source, anywhere – and to rapidly convert this learning into action – is its ultimate competitive advantage.” Giving individuals, teams and organizations the tools and technologies to accelerate and broaden their learning is an important part of the future of learning.

Second, consider the information explosion – the sheer volume of information available, the convenience of information access (due, in large part, to continuing developments in technology) and the diversity of information available. When there is too much information to digest, people are unable to locate and make use of the information they need. When they cannot process the sheer volume of information, overload occurs. The future of learning should enable the learner to sort through information and find knowledge.

Third, consider new developments in technology. Generations X and Y are considered “digital natives.” They expect that the most current technologies are available to them – including social networking, blogging, wikis, immersive learning and gaming – and to not have them is unthinkable.

Impact of New Technology

The philosophy of training has morphed from “just-in-case” (teach them everything and hope they will remember when they need it), to “just-in-time” (provide access to training just before the point of need), to “just-for-me.” With respect to the latter, learning is presented in a preferred medium, with a learning path customized to reflect the student’s preferred learning style, and personalized to address the current and desired level of expertise within any given time constraint.

Imagine a scenario in which a maintenance technician at a nuclear plant has to replace a specialized valve – something she either hasn’t done for a while, or hasn’t replaced before. In a Web 2.0 world, she should be able to run a query on her iPhone or similar handheld device and pull up the maintenance documentation for that particular valve, access the maintenance records, view a video of the approved replacement procedure, or access an expert who could coach her through the process.

Learning Devices

What needs to be in place to enable this vision of the future of learning? First, workers will need a device that can access the information by connecting over a secure wireless network inside the plant. Second, the learning has to be available in small chunks – learning nuggets or learning assets. Third, the learning needs to be assembled along the dimensions of learning style, desired and target level of expertise, time available and media type, among other factors. Finally, experts need to be identified, tagged to particular tasks and activities, and made accessible.

Fortunately, some of the same learning technology tools that will enable centralized maintenance and accelerated development will also facilitate personalized learning. When training is organized at a more granular level – the learning asset level – not only can it be leveraged over a variety of courses and courseware, it can also be re-assembled and ported to a variety of outputs such as lesson books, e-learning and m-learning (mobile-learning).
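
As a minimal sketch of that assembly step – the asset catalog and metadata fields below are invented for illustration – a system might filter granular assets by topic and expertise gap, prefer the learner’s media type, and fill the available time:

# Hypothetical sketch of assembling "learning nuggets" into a personal plan.
assets = [
    {"topic": "valve-replacement", "media": "video", "level": 2, "minutes": 8},
    {"topic": "valve-replacement", "media": "text",  "level": 2, "minutes": 5},
    {"topic": "valve-replacement", "media": "video", "level": 3, "minutes": 15},
]

def assemble(topic, preferred_media, current_level, target_level, time_budget):
    # Keep assets that advance the learner toward the target expertise level.
    candidates = [a for a in assets
                  if a["topic"] == topic
                  and current_level < a["level"] <= target_level]
    # Prefer the learner's media type, then easier material first.
    candidates.sort(key=lambda a: (a["media"] != preferred_media, a["level"]))
    plan, used = [], 0
    for a in candidates:
        if used + a["minutes"] <= time_budget:   # respect the time available
            plan.append(a)
            used += a["minutes"]
    return plan

for a in assemble("valve-replacement", "video", 1, 3, time_budget=25):
    print(a["media"], a["level"], a["minutes"])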

The example above pointed out another shift in our thinking about learning. Traditionally, our paradigm has been that learning occurs in a classroom, and when it occurs, it has taken the form of a course. In the example above, the learning takes place anywhere and anytime, moving from the formal classroom environment to an informal environment. Of course, just because learning is “informal” does not mean it is accidental, or that it occurs without preparation.

Some estimates claim 10 percent of our learning is achieved through formal channels, 20 percent from coaching, and 70 percent through informal means. Peter Henschel, former director of the Institute for Research on Learning, raised an important question: If nearly three-quarters of learning in corporations is informal, can we afford to leave it to chance?

There are still several open issues regarding informal learning:

  • How do we evaluate the impact/effectiveness of informal learning? (Informal learning, but formal demonstration of competency/proficiency);
  • How do we record one’s participation and skill-level progression in informal learning? (Informal learning, but formal recording of learning completion);
  • Who will create and maintain informal learning assets? (Informal learning, but formal maintenance and quality assurance of the learning content); and
  • When does informal learning need a formal owner (in a full- or part-time role)? (Informal learning, but will need formal policies to help drive and manage).

In the nuclear industry, accurate and up-to-date documentation is a necessity. As the nuclear industry moves toward more effective use of informal channels of learning, it will need to address these issues.

Immersive Learning (Or Virtual Worlds)

The final frontier for the future of learning is expansion into virtual worlds, also known as immersive learning. Although Second Life (SL) is the best known virtual world, there are also emerging competitors, including Active Worlds, Forterra (OLIVE), Qwag and Unisfair.

Created in 2003 by Linden Lab of San Francisco, SL is a three-dimensional, virtual world that allows users to buy “property,” create objects and buildings and interact with other users. Unlike a game with rules and goals, SL offers an open-ended platform where users can shape their own environment. In this world, avatars do many of the same things real people do: work, shop, go to school, socialize with friends and attend rock concerts.

From a pragmatic perspective, working in an immersive learning environment such as a virtual world provides several benefits that make it an effective alternative to real life:

  • Movement in 3-D space. A virtual world could be useful in any learning situation involving movement, danger, tactics, or quick physical decisions, such as emergency response.
  • Engendering Empathy. Participants experience scenarios from another person’s perspective. For example, the Future of Learning team is exploring ways to re-create the control room experience during the Three Mile Island incident, to provide a cathartic experience for the next generation workforce so they can better appreciate the importance of safety and human performance factors.
  • Rapid Prototyping and Co-Design. A virtual world is an inexpensive environment for quickly mocking up prototypes of tools or equipment.
  • Role Playing. By conducting role plays in realistic settings, instructors and learners can take on various avatars and play those characters.
  • Alternate Means of Online Interaction. Although users would likely not choose a virtual world as their primary online communication tool, it provides an alternative means of indicating presence and allowing interaction. Users can have conversations, share note cards, and give presentations. In some cases, SL might be ideal as a remote classroom or meeting place to engage across geographies and utility boundaries.

Robert Amme, a physicist at the University of Denver, has another laboratory in SL. Funded by a grant from the Nuclear Regulatory Commission, his team is building a virtual nuclear reactor to help train the next generation of environmental engineers on how to deal with nuclear waste (see Figure 1). The INPO Future of Learning team is exploring ways to leverage this type of learning asset as part of the nuclear citizenship initiative.

There is no doubt that nuclear power generation is once again on an upswing, but critical to its revival and longevity will be the manner in which we prepare the current and next generation of workers to become outstanding stewards of a safe, effective, clean-energy future.

The Smart Grid: A Balanced View

Energy systems in both mature and developing economies around the world are undergoing fundamental changes. There are early signs of a physical transition from the current centralized energy generation infrastructure toward a distributed generation model, where active network management throughout the system creates a responsive and manageable alignment of supply and demand. At the same time, the desire for market liquidity and transparency is driving the world toward larger trading areas – from national to regional – and providing end-users with new incentives to consume energy more wisely.

CHALLENGES RELATED TO A LOW-CARBON ENERGY MIX

The structure of current energy systems is changing. As load and demand for energy continue to grow, many current-generation assets – particularly coal and nuclear systems – are aging and reaching the end of their useful lives. The increasing public awareness of sustainability is simultaneously driving the international community and national governments alike to accelerate the adoption of low-carbon generation methods. Complicating matters, public acceptance of nuclear energy varies widely from region to region.

Public expectations of what distributed renewable energy sources can deliver – for example, wind, photovoltaic (PV) or micro-combined heat and power (micro-CHP) – are increasing. But unlike conventional sources of generation, the output of many of these sources is driven not by electricity load but by weather conditions or heat demand. From a system perspective, this raises new challenges for balancing supply and demand.

In addition, these new distributed generation technologies require system-dispatching tools to effectively control the low-voltage side of electrical grids. Moreover, they indirectly create a scarcity of “regulating energy” – the energy necessary for transmission operators to maintain the real-time balance of their grids. This forces the industry to try to harness the power of conventional central generation technologies, such as nuclear power, in new ways.

A European Union-funded consortium named Fenix is identifying innovative network and market services that distributed energy resources can potentially deliver, once the grid becomes “smart” enough to integrate all energy resources.

In Figure 1, the Status Quo Future represents how system development would play out under the traditional system operation paradigm characterized by today’s centralized control and passive distribution networks. The alternative, Fenix Future, represents the system capacities with distributed energy resources (DER) and demand-side generation fully integrated into system operation, under a decentralized operating paradigm.

CHALLENGES RELATED TO NETWORK OPERATIONAL SECURITY

The regulatory push toward larger trading areas is increasing the number of market participants. This trend is in turn driving the need for increased network dispatch and control capabilities. Simultaneously, grid operators are expanding their responsibilities across new and complex geographic regions. Combine these factors with an aging workforce (particularly when trying to staff strategic processes such as dispatching), and it’s easy to see why utilities are becoming increasingly dependent on information technology to automate processes that were once performed manually.

Moreover, the stochastic nature of energy sources significantly increases uncertainty regarding supply. Researchers are trying to improve the accuracy of the information captured in substations, but this requires new online dispatching stability tools. Additionally, as grid expansion remains politically controversial, current efforts are mostly focused on optimizing energy flow in existing physical assets, and on trying to feed asset data into systems calculating operational limits in real time.

Last but not least, this enables the extension of generation dispatch and congestion management into low-voltage distribution grids. Although these grids were traditionally used to flow energy one way – from generation to transmission to end-users – the increasing penetration of distributed resources creates a new need to coordinate the dispatch of these resources locally, and to minimize transportation costs.

CHALLENGES RELATED TO PARTICIPATING DEMAND

Recent events have shown that decentralized energy markets are vulnerable to price volatility. This poses potentially significant economic threats for some nations because there’s a risk of large industrial companies quitting deregulated countries because they lack visibility into long-term energy price trends.

One potential solution is to improve market liquidity in the shorter term by providing end-users with incentives to conserve energy when demand exceeds supply. The growing public awareness of energy efficiency is already leading end-users to be much more receptive to using sustainable energy; many utilities are adding economic incentives to further motivate end-users.

These trends are expected to create radical shifts in transmission and distribution (T&D) investment activities. After all, traditional centralized system designs, investments and operations are based on the premise that demand is passive and uncontrollable, and that it makes no active contribution to system operations.

However, the extensive rollout of intelligent metering capabilities has the potential to reverse this, and to enable demand to respond to market signals, so that end-users can interact with system operators in real or near real time. The widening availability of smart metering thus has the potential to bring with it unprecedented levels of demand response that will completely change the way power systems are planned, developed and operated.

CHALLENGES RELATED TO REGULATION

Parallel with these changes to the physical system structure, the market and regulatory frameworks supporting energy systems are likewise evolving. Numerous energy directives have established the foundation for a decentralized electricity supply industry that spans formerly disparate markets. This evolution is changing the structure of the industry from vertically integrated, state-owned monopolies into an environment in which unbundled generation, transmission, distribution and retail organizations interact freely via competitive, accessible marketplaces to procure and sell system services and contracts for energy on an ad hoc basis.

Competition and increased market access seem to be working at the transmission level in markets where there are just a handful of large generators. However, this approach has yet to be proven at the distribution level, where it could facilitate thousands and potentially millions of participants offering energy and systems services in a truly competitive marketplace.

MEETING THE CHALLENGES

As a result, despite all the promise of distributed generation, the increasingly decentralized system will become unstable without the corresponding development of technical, market and regulatory frameworks over the next three to five years.

System management costs are increasing, and threats to system security are a growing concern as installed distributed generating capacity in some areas exceeds local peak demand. The amount of “regulating energy” provisions rises as stress on the system increases; meanwhile, governments continue to push for distributed resource penetration and launch new energy efficiency ideas.

At the same time, most of the large T&D utilities intend to launch new smart grid prototypes that, once stabilized, will be scalable to millions of connection points. The majority of these rollouts are expected to occur between 2010 and 2012.

From a functionality standpoint, the majority of these associated challenges are related to IT system scalability. The process will require applying existing algorithms and processes to generation activities, but in an expanded and more distributed manner.

The following new functions will be required to build a smart grid infrastructure that enables all of this:

New generation dispatch. This will enable utilities to expand their portfolios of current-generation dispatching tools to schedule generation assets across both transmission and distribution. Utilities could thus better manage the growing number of parameters impacting the dispatch decision, including fuel options, maintenance strategies, the generation unit’s physical capability, weather, network constraints, load models, emissions (modeling, rights, trading) and market dynamics (indices, liquidity, volatility).
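
To make the dispatch idea concrete, the sketch below commits the cheapest resources first until a forecast load is met. The unit names, capacities and prices are invented; production tools layer the parameters listed above on top of this core merit-order logic:

# Simplified merit-order dispatch sketch with invented units and prices.
units = [
    {"name": "wind-1",    "capacity_mw": 300, "cost_per_mwh": 5},
    {"name": "nuclear-1", "capacity_mw": 900, "cost_per_mwh": 12},
    {"name": "gas-1",     "capacity_mw": 400, "cost_per_mwh": 60},
]

def dispatch(load_mw):
    # Commit the cheapest units first until the forecast load is covered.
    schedule, remaining = {}, load_mw
    for u in sorted(units, key=lambda u: u["cost_per_mwh"]):
        take = min(u["capacity_mw"], remaining)
        if take > 0:
            schedule[u["name"]] = take
            remaining -= take
    if remaining > 0:
        raise RuntimeError(f"{remaining} MW of load unserved")
    return schedule

print(dispatch(1200))   # {'wind-1': 300, 'nuclear-1': 900}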

Renewable and demand-side dispatching systems. By expanding current energy management systems (EMS) capability and architecture, utilities should be able to scale to include millions of active producers and consumers. Resources will be distributed in real time by energy service companies, promoting the most eco-friendly portfolio dispatch methods based on contractual arrangements between the energy service providers and these distributed producers and consumers.

Integrated online asset management systems. New technology tools that help transmission grid operators assess the condition of their overall assets in real time will not only maximize asset usage, but will lead to better leveraging of utilities’ field forces. New standards such as IEC 61850 offer opportunities to manage such models more centrally and more consistently.

Online stability and defense plans. The increasing penetration of renewable generation into grids, combined with deregulation, increases the need for flow control into interconnections between several transmission system operators (TSOs). Additionally, the industry requires improved “situation awareness” tools to be installed in the control centers of utilities operating in larger geographical markets. Although conventional transmission security steady-state indicators have improved, utilities still need better early warning applications and adaptable defense plan systems.

MOVING TOWARDS A DISTRIBUTED FUTURE

As concerns about energy supply have increased worldwide, the focus on curbing demand has intensified. Regulatory bodies around the world are thus actively investigating smart meter options. But despite the benefits that smart meters promise, they also raise new challenges on the IT infrastructure side. Before each end-user is able to flexibly interact with the market and the distribution network operator, massive infrastructure re-engineering will be required.

Nonetheless, energy systems throughout the world are already evolving from a centralized to a decentralized model. But to successfully complete this transition, utilities must implement active network management throughout their systems to enable a responsive and manageable alignment of supply and demand. By accomplishing this, energy producers and consumers alike can better match supply and demand, and drive the world toward sustainable energy conservation.

The Virtual Generator

Electric utility companies today constantly struggle to find a balance between generating sufficient power to satisfy their customers’ dynamic load requirements and minimizing their capital and operating costs. They spend a great deal of time and effort attempting to optimize every element of their generation, transmission and distribution systems to achieve both their physical and economic goals.

In many cases, “real” generators waste valuable resources – waste that if not managed efficiently can go directly to the bottom line. Energy companies therefore find the concept of a “virtual generator,” or a virtual source of energy that can be turned on when needed, very attractive. Although generally only representing a small percentage of utilities’ overall generation capacity, virtual generators are quick to deploy, affordable, cost-effective and represent a form of “green energy” that can help utilities meet carbon emission standards.

Virtual generators use forms of dynamic voltage and capacitance (Volt/VAr) adjustments that are controlled through sensing, analytics and automation. The overall process involves first flattening or tightening the voltage profiles by adding additional voltage regulators to the distribution system. Then, by moving the voltage profile up or down within the operational voltage bounds, utilities can achieve significant benefits (Figure 1). It’s important to understand, however, that because voltage adjustments will influence VArs, utilities must also adjust both the placement and control of capacitors (Figure 2).
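
The numbers in the following sketch are invented, but they illustrate the mechanics: flattening the profile lets the operator drop the entire curve while every point on the feeder stays inside the regulatory band, and the lower average service voltage is what yields the “virtual” energy:

# Toy feeder voltage profiles: a steep, unregulated slope versus a
# regulator-flattened slope that can be lowered. Values are invented.
MIN_V, MAX_V = 114.0, 126.0

def profile(head_v, drop_per_node, nodes=10):
    # Voltage seen at each of `nodes` points along the feeder.
    return [head_v - i * drop_per_node for i in range(nodes)]

steep = profile(125.0, 1.2)   # head runs high so the tail stays legal
flat  = profile(116.0, 0.2)   # regulators flatten the slope; whole curve drops

for name, p in (("steep", steep), ("flat", flat)):
    ok = all(MIN_V <= v <= MAX_V for v in p)
    avg = sum(p) / len(p)
    print(f"{name}: head={p[0]:.1f} V  tail={p[-1]:.1f} V  "
          f"avg={avg:.1f} V  compliant={ok}")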

Various business drivers will influence the use of Volt/VAr. A utility could, for example, use Volt/VAr to:

  • Respond to an external system-wide request for emergency load reduction;
  • Assist in reducing a utility’s internal load – both regional and throughout the entire system;
  • Target specific feeder load reduction through the distribution system;
  • Respond as a peak load relief (a virtual peaker);
  • Optimize Volt/VAr for better reliability and more resiliency;
  • Maximize the efficiency of the system and subsequently reduce energy generation or purchasing needs;
  • Achieve economic benefits, such as generating revenue by selling power on the spot market; and
  • Supply VArs to supplement off-network deficiencies.

Each of the above potential benefits falls into one of four domains: peaking relief, energy conservation, VAr management or reliability enhancement. The peaking relief and energy conservation domains deal with load reduction; VAr management, logically enough, involves management of VArs; and reliability enhancement actually increases load. In this latter domain, the utility will use increased voltage to enable greater voltage tolerances in self-healing grid scenarios or to improve the performance of non-constant power devices to remove them from the system as soon as possible and therefore improve diversity.

Volt/VAr optimization can be applied to all of these scenarios. It is intended to either optimize a utility’s distribution network’s power factor toward unity, or to purposefully make the power factor leading in anticipation of a change in load characteristics.

Each of these potential benefits comes from solving a different business problem. Because of this, at times they can even be at odds with each other. Utilities must therefore create fairly complex business rules supported by automation to resolve any conflicts that arise.
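
As a minimal illustration of such rules – the objective names echo the domains above, but the priority order is invented, not an actual utility rule set – automation might arbitrate between competing requests like this:

# Rule-based arbitration between competing Volt/VAr objectives (sketch).
PRIORITY = [
    "emergency_load_reduction",   # external system-wide request
    "reliability_enhancement",    # may deliberately raise voltage
    "peaking_relief",
    "energy_conservation",
]

def resolve(active_requests):
    # The highest-priority active objective wins; the others are deferred.
    for objective in PRIORITY:
        if objective in active_requests:
            return objective
    return "normal_operation"

print(resolve({"energy_conservation", "emergency_load_reduction"}))
# -> emergency_load_reduction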

Although the concept of load reduction using Volt/VAr techniques is not new, the ability to automate the capabilities in real time and drive the solutions with various business requirements is a relatively recent phenomenon. Energy produced with a virtual generator is neither free nor unlimited. However, it is real in the sense that it allows the system to use energy more efficiently.

A number of things are driving utilities’ current interest in virtual generators, including the fact that sensors, analytics, simulation, geospatial information, business process logic and other forms of information technology are increasingly affordable and robust. In addition, lower-cost intelligent electrical devices (IEDs) make virtual generators possible and bring them within reach of most electric utility companies.

The ability to innovate an entirely new solution to support the above business scenarios is now within the realm of possibility for the electric utility company. As an added benefit, much of the base IT infrastructure required for virtual generators is the same as that required for other forms of “smart grid” solutions, such as advanced meter infrastructure (AMI), demand side management (DSM), distributed generation (DG) and enhanced fault management. Utilities that implement a well-designed virtual generator solution will ultimately be able to align it with these other power management solutions, thus optimizing all customer offerings that will help reduce load.

HOW THE SOLUTION WORKS

All utilities are required, for regulatory or reliability reasons, to stay within certain high- and low-voltage parameters for all of their customers. In the United States, the American Society for Testing and Materials (ASTM) guidelines specify that the nominal voltage for a residential single-phase service should be 120 volts with a plus or minus 6-volt variance (that is, 114 to 126 volts). Other countries around the world have similar guidelines. Whatever the actual values are, all utilities are required to operate within these high- and low-voltage “envelopes.” In some cases, additional requirements may be imposed as to the amount of variance – the number of volts changed or the percent change in the voltage – that can take place over a period of minutes or hours.
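
Checking service voltages against this envelope is straightforward; in the sketch below (with invented meter readings), any service outside the 114- to 126-volt band is flagged:

# Check sampled service voltages against the 120 V plus-or-minus 6 V envelope.
NOMINAL, TOLERANCE = 120.0, 6.0
LOW, HIGH = NOMINAL - TOLERANCE, NOMINAL + TOLERANCE   # 114 V to 126 V

readings = {"meter_A": 118.4, "meter_B": 126.8, "meter_C": 114.0}
for meter, volts in readings.items():
    status = "OK" if LOW <= volts <= HIGH else "OUT OF ENVELOPE"
    print(f"{meter}: {volts:.1f} V -> {status}")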

Commercial customers may have different high/low values, but the principle remains the same. In fact, it is the mixture of residential, commercial and industrial customers on the same feeder that makes the virtual generation solution almost a requirement if a utility wants to optimize its voltage regulation.

Although it would be ideal for a utility to deliver 120-volt power consistently to all customers, the physical properties of the distribution system as well as dynamic customer loading factors make this difficult. Most utilities are already trying to accomplish this through planning, network and equipment adjustments, and in many cases use of automated voltage control devices. Despite these efforts, however, in most networks utilities are required to run the feeder circuit at higher-than-nominal levels at the head of the circuit in order to provide sufficient voltage for downstream users, especially those at the tails or end points of the circuit.

In a few cases, electric utilities have added manual or automatic voltage regulators to step up voltage at one or more points in a feeder circuit because of nonuniform loading and/or varied circuit impedance characteristics throughout the circuit profile. This stepped-up slope, or curve, allows the utility company to comply with the voltage level requirements for all customers on the circuit. In addition, utilities can satisfy the VAr requirements for operational efficiency of inductive loads using switched capacitor banks, but they must coordinate those capacitor banks with voltage adjustments as well as power demand. Refining voltage profiles through virtual generation usually implies a tight corresponding control of capacitance as well.

The theory behind a robust Volt/VAr-regulated feeder circuit is based on the same principles but applied in an innovative manner. Rather than just using voltage regulators to keep the voltage profile within the regulatory envelope, utilities try to “flatten” the voltage curve or slope. In reality, the overall effect is a stepped/slope profile due to economic limitations on the number of voltage regulators applied per circuit. This flattening has the effect of allowing an overall reduction, or decrease, in nominal voltage. In turn, the operator may choose to move the voltage curve up or down within the regulatory voltage envelope. Utilities can derive extra benefit from this solution because all customers within a given section of a feeder circuit could be provided with the same voltage level, which should result in fewer “problem” customers who may not be in the ideal place on the circuit. It could also minimize the power wastage of overdriving the voltage at the head of the feeder in order to satisfy customers at the tails.

THE ROLE OF AUTOMATION IN DELIVERING THE VIRTUAL GENERATOR

Although theoretically simple in concept, executing and maintaining a virtual generator solution is a complex task that requires real-time coordination of many assets and business rules. Electrical distribution networks are dynamic systems with constantly changing demands, parameters and influencers. Without automation, utilities would find it impossible to deliver and support virtual generators, because it’s infeasible to expect a human – or even a number of humans – to operate such systems affordably and reliably. Therefore, utilities must leverage automation to put humans in monitoring rather than controlling roles.

There are many “inputs” to an automated solution that supports a virtual generator. These include both dynamic and static information sources. For example, real-time sensor data monitoring the condition of the networks must be merged with geospatial information, weather data, spot energy pricing and historical data in a moment-by-moment, repeating cycle to optimize the business benefits of the virtual generator. Complicating this, in many cases the team managing the virtual generator will not “own” all of the inputs required to feed the automated system. Frequently, they must share this data with other applications and organizational stakeholders. It’s therefore critical that utilities put into place an open, collaborative and integrated technology infrastructure that supports multiple applications from different parts of the business.

One of the most critical aspects of automating a virtual generator is having the right analytical capabilities to decide where and how the virtual generator solution should be applied to support the organization’s overall business objectives. For example, utilities should use load predictors and state estimators to determine future states of the network based on load projections given the various Volt/VAr scenarios they’re considering. Additionally, they should use advanced analytics to determine the resiliency of the network or the probability of internal or external events influencing the virtual generator’s application requirements. Still other types of analyses can provide utilities with a current view of the state of the virtual generator and how much energy it’s returning to the system.
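
As a deliberately simple illustration of the predictive piece, the sketch below forecasts the next hour’s feeder load as a weighted average of recent hours and picks a Volt/VAr action accordingly. Real systems use state estimators and far richer models; the data and threshold here are invented:

# Naive load predictor driving a Volt/VAr decision (illustrative only).
history_mw = [410, 425, 450, 480, 510, 540]   # recent hourly feeder load

def forecast(history, weights=(0.5, 0.3, 0.2)):
    # Weighted average of the most recent hours, newest weighted heaviest.
    recent = history[-len(weights):][::-1]
    return sum(w * x for w, x in zip(weights, recent))

next_mw = forecast(history_mw)
action = "lower voltage profile (conserve)" if next_mw > 500 else "hold profile"
print(f"forecast: {next_mw:.0f} MW -> {action}")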

While it is important that all these techniques be used in developing a comprehensive load-management strategy, they must be unified into an actionable, business-driven solution. The business solution must incorporate the values achieved by the virtual generator solutions, their availability, and the ability to coordinate all of them at all times. A voltage management solution that is already being used to support customer load requirements throughout the peak day will be of little use to the utility for load management. It becomes imperative that the utility understand the effect of all the voltage management solutions when they are needed to support the energy demands on the system.

Wind Energy: Balancing the Demand

In recent years, exponential demand for new U.S. wind energy-generating facilities has nearly doubled America’s installed wind generation. By the end of 2007, our nation’s total wind capacity stood at more than 16,000 megawatts (MW) – enough to power more than 4.5 million average American homes each year. And in 2007 alone, America’s new wind capacity grew 45 percent over the previous year – a record 5,244 MW of new projects and more new generating capacity than any other single electricity resource contributed in the same year. At the same time, wind-related employment nearly doubled in the United States during 2007, totaling 20,000 jobs. At more than $9 billion in cumulative investment, wind also pumped new life into regional economies hard hit by the recent economic downturn. [1]

The rapid development of wind installations in the United States comes in response to record-breaking demand driven by a confluence of factors: overwhelming consumer demand for clean, renewable energy; skyrocketing oil prices; power costs that compete with natural gas-fired power plants; and state legislatures that are competing to lure new jobs and wind power developments to their states. Despite these favorable conditions, the wind energy industry has been unable to meet America’s true demand for new wind energy-generating facilities. The barriers include the following: availability of key materials, the ability to manufacture large key components and the accessibility of skilled factory workers.

With the proper policies and related investments in infrastructure and workforce development, the United States stands to become a powerhouse exporter of wind power equipment, a wind technology innovator and a wind-related job creation engine. Escalating demand for wind energy is spurred by wind’s competitive cost against rising fossil fuel prices and mounting concerns over the environment, climate change and energy security.

Meanwhile, market trends and projections point to strong, continued demand for wind well into the future. Over the past decade, a similar surge in wind energy demand has taken place in the European Union (E.U.) countries. Wind power capacity there currently totals more than 50,000 MW, with projections that wind could provide at least 15 percent of the E.U.’s electricity by 2020 – amounting to an installed wind capacity of 180,000 MW and an estimated workforce of more than 200,000 people in wind power manufacturing, installation and maintenance jobs.

How is it, then, that European countries were able to secure the necessary parts and people while the United States fell short in its efforts on these fronts? After all, America has a bigger land mass and a larger, higher-quality wind resource than the E.U. countries. Indeed, the United States is already home to the world’s largest wind farms, including the 735-MW Horse Hollow Wind Energy Center in Texas, which generates power for about 230,000 average homes each year. What’s more, this country also has an extensive manufacturing base, a skilled labor pool and a pressing need to address energy and climate challenges.

So what’s missing? In short, robust national policy support – a prerequisite for strong, long-term investment in the sector. Such support would enable the industry to secure long lead-time materials and sufficient ramp-up to train and employ workers to continue wind power’s surging growth. Thus, the United States must rise to the occasion and assemble several key, interrelated puzzle pieces – policy, parts and people – if it’s to tap the full potential of wind energy.

POLICY: LONG-TERM SUPPORT AND INVESTMENT

In the United States, the federal government has played a key role in funding research and development, commercialization and large-scale deployment of most of the energy sources we rely on today. The oil and natural gas industry has enjoyed permanent subsidies and tax credits that date back to 1916 when Congress created the first tax breaks for oil and gas production. The coal industry began receiving similar support in 1932 with the passage of the first depletion allowances that enabled mining companies to deduct the value of coal removed from a mine from their taxable revenue.

Still in effect today, these incentives were designed to spur exploration and extraction of oil, gas and coal, and have since evolved to include such diverse mechanisms as royalty relief for resources developed on public lands; accelerated depreciation for investments in projects like pipelines, drilling rigs and refineries; and ongoing support for technology R&D and commercialization, such as the Department of Energy’s now defunct FutureGen program for coal research, its Deep Trek program for natural gas development and the VortexFlow SX tool for low-producing oil and gas wells.

For example, the 2005 energy bill passed by Congress provided more than $2 billion in tax relief for the oil and gas industry to encourage investment in exploration and distribution infrastructure. [2] The same bill also provided an expansion of existing support for coal, which in 2003 had a 10-year value of more than $3 billion. Similarly, the nuclear industry receives extensive support for R&D – the 2008 federal budget calls for more than $500 million in support for nuclear research – as well as federal indemnity that helps lower its insurance premiums. [3]

Over the past 15 years, the wind power industry has also enjoyed federal support, with a small amount of funding for R&D (the federal FY 2006 budget allotted $38 million for wind research) and the bulk of federal support taking the form of the Production Tax Credit (PTC) for wind power generation. The PTC has helped make wind energy more cost-competitive with other federally subsidized energy sources; just as importantly, its relatively routine renewal by Congress has created conditions under which market participants have grown accustomed to its effect on wind power finance.

However, in contrast to its consistent policies for coal, natural gas and nuclear power, Congress has never granted long-term approval to the wind power PTC. For more than a decade, in fact, Congress has failed to extend the PTC for longer than two years. And in three different years, the credit was allowed to expire with substantial negative consequences for the industry. Each year that the PTC has expired, major suppliers have had to, in the words of one senior wind power executive, “shut down their factories, lay off their people and go home.”

In 2000, 2002 and 2004, the expiration of the PTC sent wind development plummeting, with an almost complete collapse of the industry in 2000. If the PTC is allowed to expire at the end of 2008, the American Wind Energy Association (AWEA) estimates that as many as 75,000 domestic jobs could be lost as the industry slows production of turbines and power consumers reduce demand for new wind power projects.

The last three years have seen tenuous progress, with Congress extending the PTC for one and then two years; however, the wind industry is understandably concerned about these short-term extensions. Of significant importance is the corresponding effect a long-term or permanent extension of the PTC would have on the U.S. manufacturing sector and related investment activity. For starters, it would put the industry on an even footing with its competitors in the fossil fuels and nuclear industries. More importantly, it would send a clear signal to the U.S. manufacturing community that wind power is a solid, long-term investment.

PARTS: UNLEASHING THE NEXT MANUFACTURING BOOM

To fully grasp the trickle-down effects of an uncertain PTC on the wind power and related manufacturing industries, one must understand the industrial scale of a typical wind power development. Today’s wind turbines represent the largest rotating machinery in the world: a modern-day, 1.5-megawatt machine towers more than 300 feet above the ground with blades that out-span the wings of a 747 jetliner, and a typical utility-scale wind farm will include anywhere from 30 to 200 of these machines, planted in rows or staggered lines across the landscape.

The sheer size and scope of a utility-scale wind farm demands a sophisticated and established network of heavy equipment and parts manufacturers that can fulfill orders in a timely fashion. Representing a familiar process for anyone who’s worked in a steel mill, forge, gear-works or similar industrial facility, the manufacture of each turbine requires massive, rolled steel tubes for the tower; a variety of bearings and related components for lubricity in the drive shaft and hub; cast steel for housings and superstructure; steel forgings for shafts and gears; gearboxes for torque transmission; molded fiberglass, carbon fiber or hybrid blades; and electronic components for controls, monitoring and other functions.

U.S. manufacturers have extensive experience making all of these components for other end-use applications, and many have even succeeded in becoming suppliers to the wind industry. For example, Ameron International – a Pasadena, Calif.-based maker of industrial steel pipes, poles and related coatings – converted an aging heavy-steel fabrication plant in Fontana, Calif., to make wind towers. At 80 meters tall, 4.8 meters in diameter and weighing in at 200 tons, a wind tower requires large production facilities that have high up-front capital costs. By converting an existing facility, Ameron was able to capture a key and rapidly growing segment of the U.S. wind market in high-wind Western states while maintaining its position in other markets for its steel products.

Other manufacturers have also seen the opportunity that wind development presents and have taken similar steps. For example, Beaird Co. Ltd, a Shreveport, La.-based metal fabrication and machined parts manufacturer, supplies towers to the Midwest, Texas and Florida wind markets, as does DMI Industries from facilities in Fargo, N.D., and Tulsa, Okla.

But the successful conversion of existing manufacturing facilities to make parts for the wind industry belies an underlying challenge: investment in new manufacturing capacity to serve the wind industry is hindered by the lack of a clear policy framework. Even at wind’s current growth rates and with the resulting pent-up domestic demand for parts, the U.S. manufacturing sector is understandably reticent to invest in new production capacity.

The cause for this reticence is depicted graphically in Figure 1. With the stop-and-go nature of the PTC regarding U.S. wind development, and the consistent demand for their products in other end-use sectors, American manufacturers have strong disincentives to invest in new capital projects targeting the wind industry. It can take two to six years to build a new factory and 15 or more years to recapture the investment. The one- to two-year investment cycle of the U.S. wind industry is therefore only attractive to players who are comfortable with the risk and can manage wind as a marginal customer rather than an anchor tenant. This means that over the long haul, the United States could be legislating itself out of the “renewables” space, which arguably has a potential of several trillion dollars of global infrastructure.

The result in the marketplace: the United States ends up importing many of the large manufactured parts that go into a modern wind turbine – translating to a missed opportunity for domestic manufacturers that could be claiming a larger chunk of the underdeveloped U.S. wind market. As the largest consumer of electricity on earth, the United States also represents the biggest untapped market for wind power. At the end of 2007, with multiple successive years of 30 to 40 percent growth, wind power claimed just 1 percent of the U.S. electricity market. The raw potential for wind power in the United States is three times our total domestic consumption, according to the U.S. Energy Information Administration; if supply chain issues weren’t a problem, wind power could feasibly grow to supply as much as 20 to 30 percent of our $330 billion annual domestic electricity market. At 20 percent of domestic energy supply, the United States would need 300,000 MW of installed wind power capacity – an amount that would take 20 to 30 years of sustained manufacturing and development to achieve. But that would require growth well above our current pace of 4,000 to 5,000 MW annually – growth that simply isn’t possible given current supply constraints.
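
A back-of-envelope check of that arithmetic, using the article’s own figures plus a hypothetical accelerated build rate, shows why the current pace falls short:

# Years to reach 300,000 MW of wind from the 2007 base of 16,000 MW.
target_mw = 300_000
installed_mw = 16_000            # end of 2007, per the article

for rate in (5_000, 12_000):     # MW/year: current pace vs. a hypothetical ramp-up
    years = (target_mw - installed_mw) / rate
    print(f"at {rate:,} MW per year: about {years:.0f} years")
# at 5,000 MW per year: about 57 years  (far beyond the 20- to 30-year window)
# at 12,000 MW per year: about 24 years (within it)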

Of course, that’s just the U.S. market. Global wind development is set to more than triple by 2015, with cumulative installed capacity expected to rise from approximately 91 gigawatts (GW) by the end of 2007 to more than 290 GW by the end of 2015, according to forecasts by Emerging Energy Research (EER). Annual MW added for global wind power is expected to increase more than 50 percent, from approximately 17.5 GW in 2007 to more than 30 GW in 2015, according to EER’s forecasts. [4]

By offering the wind power industry the same long-term tax benefits enjoyed by other energy sources, Congress could trigger a wave of capital investment in new manufacturing capacity and turn the United States from a net importer of wind power equipment to a net exporter. But extending the PTC is not the final step: as much as any other component, a robust wind manufacturing sector needs skilled and dedicated people.

PEOPLE: RECLAIMING OUR MANUFACTURING ROOTS

In 2003, the National Association of Manufacturers released a study outlining many of the challenges facing our domestic manufacturing base. “Keeping America Competitive – How a Talent Shortage Threatens U.S. Manufacturing” highlights the loss of skilled manufacturing workers to foreign competitors, the problem of an aging workforce, and a shift to a more urban, high-tech economy and culture.

In particular, the study notes a number of “image” problems for the manufacturing industry. To wit: Among a geographically, ethnically and socio-economically diverse set of respondents – ranging from students, parents and teachers to policy analysts, public officials, union leaders, and manufacturing employees and executives – the sector’s image was found to be heavily loaded with negative connotations (and universally tied to the old “assembly line” stereotype) and perceived to be in a state of decline.

When asked to describe the images associated with a career in manufacturing, student respondents offered phrases such as “serving a life sentence,” being “on a chain gang” or a “slave to the line,” and even being a “robot.” Even more telling, most adult respondents said that people “just have no idea” of manufacturing’s contribution to the American economy.

The effect of this “sector fatigue” can be seen across the Rust Belt in the aging factories, retiring workforce and depressed communities hit hard by America’s turn away from manufacturing. Wind power may be uniquely positioned to help reverse this trend. A growing number of America’s young people are concerned about environmental issues, such as pollution and global warming, and want to play a role in solving these problems. With the lure of good-paying jobs in an industry committed to environmental quality and poised for tremendous growth, wind power may provide an answer for manufacturers looking to attract and retain top talent.

We’ve already seen that you don’t need a large wind power resource in your state to enjoy the economic benefits of wind’s surging growth: whether it’s rolled steel from Louisiana and Oklahoma, gear boxes and cables from Wisconsin and New Hampshire, electronic components from Massachusetts and Vermont, or substations and blades from Ohio and Florida, the wind industry’s need for manufactured parts – and for the skilled labor that makes them – is massive, distributed and growing by the day.

UNLEASHING THE POWER OF EVOLUTION

The wind power industry offers a unique opportunity for revitalizing America’s manufacturing sector, creating vibrant job growth in currently depressed regions and tapping new export markets for American-made parts. For utilities and energy consumers, wind power provides a hedge against volatile energy costs and harvests one of our most abundant natural resources for energy security.

The time for wind power is now. As mankind has evolved, so too have our primary sources of energy: from the burning of wood and animal dung to whale oil and coal; to petroleum, natural gas and nuclear fuels; and now to wind turbines. The shift to wind power represents a natural evolution that will provide both the United States and the world with critical economic, environmental and technological solutions. As energy technologies continue to evolve and mature, wind power will soon be joined by solar power, ocean current power and even hydrogen as cost-competitive solutions to our pressing energy challenges.

ENDNOTES

  1. “American Wind Energy Association 2007 Market Report” (January 2008). www.awea.org/Market_Report_Jan08.pdf
  2. Energy Policy Act of 2005, Sections 1323-1329. www.citizen.org/documents/energyconferencebill0705.pdf
  3. Aileen Roder, “An Overview of Senate Energy Bill Subsidies to the Fossil Fuel Industry” (2003), Taxpayers for Common Sense website. www.taxpayer.net/greenscissors/LearnMore/senatefossilfuelsubsidies.htm
  4. “Report: Global Wind Power Base Expected to Triple by 2015” (November 2007), North American Windpower. www.nawindpower.com/naw/e107_plugins/content/content_lt.php?content.1478