Infrastructure and the Economy

With utility infrastructure aging rapidly, reliability of service is threatened. Yet the economy is hurting, unemployment is accelerating, environmental mandates are rising, and the investment portfolios of both seniors and soon-to-retire boomers have fallen dramatically. Everyone agrees change is needed. The question is: how?

In every one of these respects, state regulators have the power to effect change. In fact, the policy-setting authority of the states is not only an essential complement to federal energy policy, it is a critical building block for economic recovery.

There is no question we need infrastructure development. Almost 26 percent of the distribution infrastructure owned and operated by the electric industry is at or past the end of its service life. For transmission, the number is approximately 15 percent, and for generation, about 23 percent. And that’s before considering the rising demand for electricity needed to drive our digital economy.

The new administration plans to spend hundreds of billions of dollars on infrastructure projects. However, most of the money will go toward roads, transportation, water projects and wastewater systems, with lesser amounts designated for renewable energy. It appears that only a small portion of the funds will be designated for traditional central station generation, transmission and distribution. And where such funds are available, they appear to be in the form of loan guarantees, especially in the transmission sector.

The U.S. transmission system is in need of between $50 billion and $100 billion of new investment over the next 10 years, and approximately $300 billion by 2030. These investments are required to connect renewable energy sources, make the grid smarter, improve electricity market efficiency, reduce transmission-related energy losses, and replace assets that are too old. In the next three years alone, the investor-owned utility sector will need to spend about $30 billion on transmission lines.

Spending on distribution over the next decade could approximate $200 billion, rising to $600 billion by 2030. About $60 billion to $70 billion of this will be spent in just the next three years.

The need for investment in new generating stations is more difficult to estimate, owing to uncertainty about which technologies will prove most economic under future greenhouse gas regulations and the technology preferences of Congress and the administration. However, it could easily be somewhere between $600 billion and $900 billion by 2030. Of this amount, between $100 billion and $200 billion could be invested over the next three years and as much as $300 billion over the next 10. It will be mostly later in that 10-year period, and beyond, that new nuclear and carbon-compliant coal capacity is expected to come on line in significant amounts. That will raise generating plant investments dramatically.

Jobs, and the Job of Regulators

All of this construction would maintain or create a significant number of jobs. We estimate that somewhere between 150,000 and 300,000 jobs could be created annually by this build-out through 2030, including construction jobs, post-construction utility operating positions, and general economic "ripple effect" jobs.

These are sustainable levels of employment – jobs every year, not just one-time surges.

In addition, others have estimated that the development of the smart grid could add between 150,000 and 280,000 jobs. Clearly, then, utility generation, transmission and distribution investments can provide a substantial boost for the economy, while at the same time improving energy efficiency, interconnecting critical renewable energy sources and making the grid smarter.

The beauty is that no federal legislation, no taxpayer money and no complex government grant or loan processes are required. This is virtually all within the control of state regulators.

Timely consideration of utility permit applications and rate requests, as well as project pre-approvals by regulators, allowance of construction work in progress in rate base, and other progressive regulatory practices would vastly accelerate the pace at which these investments could be made and financed, and new jobs created. Delays in permitting and approval not only slow economic recovery, but also create financial uncertainty, potentially threatening ratings, reducing earnings and driving up capital costs.

Helping Utility Shareholders

This brings us to our next point: Regulators can and should help utility shareholders. Although they have a responsibility for controlling utility rates charged to consumers, state regulators also need to provide returns on equity and adopt capital structures that recognize the risks, uncertainties and investor expectations that utilities face in today’s and tomorrow’s very de-leveraged and uncertain financial markets.

It is now widely acknowledged that risk has not been properly priced in the recent past. As with virtually all other industries, equity will play a far more critical role in utility project and corporate finance than in the past. For utilities to attract the equity needed for the buildout just described, equity must earn its full, risk-adjusted return. This requires a fresh look at stockholder expectations and requirements.

A typical utility stockholder is not some abstract, occasionally demonized, capitalist, but rather a composite of state, city, corporate and other pension funds, educational savings accounts, individual retirement accounts and individual shareholders who are in, or close to, retirement. These shares are held largely by, or for the benefit of, everyday workers of all types, both employed and retired: government employees, first responders, trades and health care workers, teachers, professionals, and other blue and white collar workers throughout the country.

These people live across the street from us, around the block, down the road or in the apartments above and below us. They rely on utility investments for stable income and growth to finance their children’s education, future home purchases, retirement and other important quality-of-life activities. They comprise a large segment of the population that has been injured by the economy as much as anyone else.

Fair public policy suggests that regulators be mindful of this and that they allow adequate rates of return needed for financial security. It also requires that regulatory commissions be fair and realistic about the risk premiums inherent in the cost of capital allowed in rate cases.

The cost of providing adequate returns to shareholders is not particularly high. Ironically, the passion of the debate that surrounds cost of capital determinations in a rate case is far greater than the monetary effect that any given return allowance has on an individual customer’s bill.

Typically, the differential return on equity in dispute in a rate case – perhaps between 100 and 300 basis points – represents between 0.5 and 2 percent of a customer's bill for a "wires only" company. (The impact on the bills of a vertically integrated company would be higher.) Acceptance of the utility's requested rate of return would no doubt have a relatively small adverse effect on customers' bills, while making a substantial positive impact on the quality of the stockholders' holdings. Fair, if not favorable, regulatory treatment also results in improved debt ratings and lower debt costs, which accrue to the benefit of customers through reduced rates.
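To make the arithmetic concrete, here is a minimal sketch – with purely illustrative numbers, not figures from any actual rate case – of how a disputed ROE differential flows through to a customer's bill for a wires-only company:

    # Illustrative sketch: how a disputed return-on-equity (ROE) differential
    # translates into dollars on a customer's bill. All inputs are assumed
    # values for a hypothetical wires-only utility, not data from the article.

    def bill_impact(rate_base, equity_ratio, roe_delta_bp, gross_up, annual_bill):
        """Added annual revenue per customer and its share of the bill."""
        roe_delta = roe_delta_bp / 10_000            # basis points -> decimal
        added = rate_base * equity_ratio * roe_delta * gross_up
        return added, added / annual_bill

    for bp in (100, 200, 300):
        dollars, share = bill_impact(
            rate_base=1_200,     # assumed distribution rate base per customer ($)
            equity_ratio=0.45,   # assumed equity share of the capital structure
            roe_delta_bp=bp,     # the ROE differential in dispute
            gross_up=1.4,        # assumed pre-tax revenue conversion factor
            annual_bill=1_000,   # assumed total annual bill per customer ($)
        )
        print(f"{bp} bp -> ${dollars:.2f}/yr, about {share:.1%} of the bill")

With these assumptions, the disputed range works out to roughly 0.8 to 2.3 percent of the annual bill – the same order of magnitude as the 0.5 to 2 percent cited above.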

The List Doesn’t Stop There

Regulators can also be helpful in addressing other challenges of the future. The linchpins of cost-effective energy and climate change policy are energy efficiency (EE) and demand-side management (DSM).

Energy efficiency is truly the low-hanging fruit, capable of providing immediate, relatively inexpensive reductions in emissions and customers' bills. However, reductions in customers' energy use run contrary to utility financial interests, unless offset by regulatory policy that removes the disincentives. Depending upon the particulars of a given utility, these policies could include revenue decoupling and the authorization of incentive – or at least fully adequate – returns on EE, DSM and smart grid investments, as well as recovery of related expenses.

Additional considerations could include accelerated depreciation of EE and DSM investments and the approval of rate mechanisms that recover lost profit margins created by reduced sales. These policies would positively address a host of national priorities in one fell swoop: the promotion of energy efficiency, greenhouse gas reduction, infrastructure investment, technology development, increased employment and, through appropriate rate base and rate of return policy, improved stockholder returns.
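As a concrete illustration of how revenue decoupling removes the throughput disincentive, here is a minimal sketch with hypothetical numbers (the actual mechanics vary by jurisdiction):

    # Minimal sketch of a revenue-per-customer decoupling true-up: when
    # efficiency programs reduce sales, a rate rider recovers the shortfall,
    # leaving the utility financially indifferent to lower volumes.
    # All figures are hypothetical.

    allowed_revenue_per_customer = 900.0    # $/yr set in the last rate case
    customers = 100_000
    actual_sales_kwh = 880_000_000          # delivered energy after EE savings
    base_rate = 0.10                        # $/kWh distribution rate

    allowed = allowed_revenue_per_customer * customers
    collected = actual_sales_kwh * base_rate
    shortfall = allowed - collected         # positive when EE cut sales

    # Spread the true-up over next year's forecast sales as a per-kWh rider.
    forecast_sales_kwh = 875_000_000
    rider = shortfall / forecast_sales_kwh
    print(f"Shortfall ${shortfall:,.0f}; rider {rider * 100:.3f} cents/kWh")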

The Leadership Opportunity

Oftentimes, regulatory decision making is narrowly focused on a few key issues in isolation, usually in the context of a particular utility, but sometimes on a statewide generic basis. Rarely is state regulatory policy viewed in a national context. Almost always, issues are litigated individually in highly partisan fashion, with little integration into a larger whole, and with utility shareholder interests usually underrepresented.

The time seems appropriate – and propitious – for regulators to lead the way to a major change in this paradigm while addressing the many urgent issues that face our nation. Regulators can make a difference, probably far beyond that for which they presently give themselves credit.

The Smart Grid in Malta

On the Mediterranean island of Malta, with a population of about 400,000 people on a land mass of just over 300 square kilometers, power, water and the economy are intricately linked. The country depends on electrically powered desalination plants for over half of its water supply. In fact, about 75 percent of the cost of water from these plants on Malta is directly related to energy production. Meanwhile, rising sea levels threaten Malta’s underground freshwater source.

Additionally, in line with the Lisbon Strategy and with other European countries, the government of Malta has set an objective of transforming the island into a competitive knowledge economy to encourage investment by foreign companies. Meeting all of these goals in a relatively short period of time presents a complex, interconnected series of challenges that require immediate attention to ensure the country has a sustainable and prosperous future.

In light of this need, the Maltese national utilities for electricity and water – Enemalta Corp. (EMC) and Water Services Corp. (WSC) – reached a partnership agreement with IBM to undertake a complete transformation of their distribution networks to improve operational efficiency and customer service levels. IBM will replace all 250,000 electricity meters with new devices, and connect these and the existing water meters to advanced information technology applications. This will enable remote reading, management and monitoring throughout the entire distribution network.

This solution will be integrated with new back-office applications for finance, billing and cash processes, as well as an advanced analytics tool to transform sensor data into valuable information supporting business decisions and improving customer service levels. It will also include a portal to enable closer interaction with – and more engagement by – the end consumers.

Why are the utility companies in Malta making such a significant investment to reshape their operations? To explore this question, it helps to start with a broader look at smart grid projects to see how they create benefits – not just for the companies making the investment, but for the local community as well.

Smart Grid Benefits

A case is often made that the basic operational benefits of a smart grid implementation can be achieved largely through Advanced Metering Infrastructure (AMI), which yields real-time readings for use in billing cycles, reduced operational cost in the low voltage network and more control over theft and fraud. In this view, the utility's operational model is further transformed to improve customer relationship management through the introduction of flexible tariffs, remote customer connection/disconnection, power curtailment options and early outage identification through low voltage grid monitoring.
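One of those operational levers – control over theft and fraud – can be illustrated with a simple energy balance. The sketch below uses hypothetical names and numbers, not Malta's actual analytics, to show the idea of comparing what a feeder injects with what its meters record:

    # Hedged sketch of AMI-enabled loss detection: compare energy injected
    # into a feeder with the sum of downstream smart meter reads; a gap
    # beyond expected technical losses suggests theft or metering fraud.

    def flag_feeder(feeder_input_kwh, meter_reads_kwh,
                    technical_loss_share=0.05, tolerance=0.03):
        """Return (unexplained_loss_share, investigate?) for one interval."""
        metered = sum(meter_reads_kwh)
        expected = feeder_input_kwh * (1 - technical_loss_share)  # line losses
        loss_share = (expected - metered) / feeder_input_kwh
        return loss_share, loss_share > tolerance

    loss, investigate = flag_feeder(
        feeder_input_kwh=12_400,                   # measured at the substation
        meter_reads_kwh=[880, 1_210, 940, 8_300],  # AMI interval reads
    )
    print(f"Unexplained losses: {loss:.1%}; investigate: {investigate}")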

But AMI extended to a broader smart grid implementation has the potential to achieve even greater strategic benefits. One can see this simply by considering the variety of questions about the impact of the carbon footprint of human activity on the climate and other environmental factors. What is a realistic tradeoff between energy consumption, energy efficiency, and economic and political dependencies at the local, national and international levels? Which energy sources will be most effective under such tradeoffs? To what extent can smaller, renewable resources replace today's large, fossil-based power sources? Where this is possible, how can hundreds or thousands of dispersed, independently operated generators be effectively monitored?

Ultimately, distribution networks need to be smart enough to distinguish among today’s large-scale utility generators; customers producing solar energy for their own needs who are virtually disconnected from the grid; those using a wind power generator and injecting the surplus back into the grid; and end-use customers requiring marginal or full supply. An even more dispersed model for distributed generation will emerge once electric vehicles circulate in towns, placing complex new demands on the grid while offering the benefit of new storage capabilities to the network.

Interdependence

Together, water and power distributors, transmission operators, generators, market regulators and final customers will interact in a much more complex, interconnected and interdependent world. This is especially true in a densely populated, modern island ecosystem, where the interplay of electricity, water, gas, communications and other services is magnified.

These points of intersection take numerous shapes. For example, on a national scale, water and sewer services can consume a large portion of the available energy supply. Water service, which is essential to customer quality of life, also presents distribution issues that are similar in many ways to those embedded in the electric grid. At a more local scale, co-generation and micro-CHP generation plants make the interdependency of electricity and gas more visible. Furthermore, utilities’ experience at providing centrally managed services that afford comfort and convenience makes the provision of additional services – communication, security, and more – imaginable. But how to make these interconnections effective contributors to quality of life raises real economic questions. Is it sensible to make an overarching investment in multiple services? How can this drive increased operational efficiency and bring new benefits to customers? Can a clear return on investment be demonstrated to investors and bill payers?

Malta is an example of an island that operates a vertically integrated and isolated electricity system. Malta has no connections with the European electricity grid and no gas pipelines to supply its generators. In the current configuration of the energy infrastructure, all of its demand must be fulfilled by the two existing power plants, which generate power using entirely imported fossil fuel. Because of these limitations on supply, and dependencies on non-native resources, electricity distribution must be extremely efficient, limiting any loss of energy as much as possible. Both technical and commercial losses must be kept fully under control, and theft must be effectively eliminated from the system to avoid unfair social accounting and to ensure proper service levels to all customers.

Estimates of current economic losses in Malta run to millions of euros for non-technical losses alone. At these levels, and with limited generation capacity, quality of service and the ability to satisfy demand at all times are threatened. Straining the system even further is the reality that Malta, without significant natural water sources, must rely on seawater purification to supply water to its citizens. This desalination process absorbs roughly one-third of the annual power consumption on the island.

But the production process is not the only source of interdependency between electricity and water; the distribution principles of each have strong ties as well. In most locations in the world, electricity and water distribution have opposing characteristics that allow them to enjoy some symbiotic benefits. Electricity cannot be effectively stored, so generation must match and synchronize in time with demand. Water generally has the opposite characteristic: it can be stored so easily that it frequently serves as pre-generation capacity in hydro generation.

But on an island like Malta, this relationship is turned on its head. There is no natural water to store, and once produced, purified water should be consumed rather quickly. If it is produced in excess, then reservoir evaporation and pipeline losses can affect the desalination effort and the final efficiency of the process. So in Malta, unlike much of the rest of the world, water providers tend to view customer demand in a similar way as electricity providers, and the demand profiles are unable to support each other as they can elsewhere.

These are qualitative observations. But if electricity and water networks can be monitored, and real-time data supplied, providers can begin to assess important questions regarding operational and financial optimization of the system, which will, among other benefits, improve reliability and service quality and keep costs low.
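For instance, with interval data from both networks, an operator could schedule desalination runs into the hours when electrical demand is lightest, within the limits of short-term water storage. The sketch below is a toy illustration with invented numbers, not Malta's actual dispatch logic:

    # Toy sketch: shift a desalination plant's assumed daily run-time into
    # the lowest-demand hours of an assumed system load curve.

    hourly_demand_mw = [55, 52, 50, 49, 50, 54, 62, 70, 78, 82, 84, 85,
                        86, 85, 84, 83, 82, 84, 88, 90, 86, 78, 68, 60]
    desal_load_mw = 20           # assumed plant draw when running
    hours_needed = 10            # assumed run-time to meet daily water demand

    # Rank hours by system demand and run the plant in the lightest ones.
    cheapest = sorted(range(24), key=lambda h: hourly_demand_mw[h])
    run_hours = sorted(cheapest[:hours_needed])

    peak = max(hourly_demand_mw[h] + (desal_load_mw if h in run_hours else 0)
               for h in range(24))
    print("Run desalination in hours:", run_hours)
    print("Resulting system peak:", peak, "MW")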

Societal Implications

An additional issue the government of Malta faces is its effort to ensure that the population has a sufficient and diverse educational and technical experience base. When a company is attracted to invest in Malta, it benefits from finding local residents with the appropriate skills to employ; costs increase if too many foreign nationals must be brought in to operate the company. Therefore, pervasive education on information and communication technology-related topics is a priority for the government, aimed at young students as well as adult citizens.

Therein lies a further – but no less important – benefit of bringing a smart grid to Malta. Energy efficiency campaigns supported by smart meters will not only help citizens control their consumption and make more efficient and effective electricity and water operations a reality; they will also help raise the island's technology culture to a new level. Meter installers will work with palmtop and other advanced IT applications, learning to connect the devices not only to the physical electrical infrastructure, but also to the embedded information infrastructure. From smart home components to value-added services, commercial and industrial players will look to new opportunities that leverage the smart grid infrastructure in Malta as well, adding highly skilled jobs and new businesses to the Maltese economy.

Benefits will expand down to the elementary education levels as well. For example, it will be possible for schools to visit utility demonstration centers where the domestic meter can be presented as an educational tool. This potential includes making energy efficiency a door to educational programs on responsible citizenship, science, mathematics, environmental sustainability and many other key learning areas. Families will find new incentive to become familiar with the Internet as they connect to the utility’s website to control their energy bill and investigate enhanced tariffs for more cost-effective use of basic services.

Conclusion

Malta is famed for its Megalithic Temples – the oldest free-standing buildings in Europe, older than the Pyramids of Egypt [1]. But with its smart grid project, it stands to be home to one of the newest and most advanced infrastructure projects as well. The result of the Maltese smart grid effort will be an end-to-end electricity and water transmission and distribution system. It will not only enable more efficient consumption of energy and water, but will completely transform the relationship of Maltese consumers with their utilities, while enhancing their education and employment prospects. These benefits go well beyond the traditional calculation for, say, a simple AMI-focused project, and demonstrate that a smart grid project in an island environment can do far more than improve utility operations. It can transform the entire community in ways that will improve the quality of life in Malta for generations to come.

Reference:

  1. The Bradshaw Foundation, 2009.

An Australian Approach to Energy Innovation and Collaboration

Just as global demand for energy is steadily increasing, so too are the recognized costs of power generation. A recent report by Australia's Treasury on the possibility of creating a low-emissions future noted that electricity production currently accounts for 34 percent of the nation's net greenhouse gas emissions, and that it was the fastest-growing contributor to greenhouse gas emissions over the period from 1990 to 2006 [1].

This growing realization of the true cost of energy production will be brought into stark relief with the likely implementation of a national emissions trading scheme in 2010.

Australia's energy producers are entering an era of great change, with increasing pressure to drive efficiencies on both the supply and demand sides of their businesses. These pressures manifest themselves in the operation of energy and utilities organizations as three basic needs:

  • To tighten the focus on delivering value, within the paradigm of achieving more with less, and while concentrating on their core business;
  • To exploit the opportunities of an industry in transformation, and to build new capabilities; and
  • To act with speed in terms of driving leadership, setting the agenda, managing change and leveraging experience – all while managing risk.

The net effect of the various government initiatives and mandates around energy production is to drive energy and utility companies to deliver power more responsibly and efficiently. The most obvious evidence of this reaction is the development of advanced metering infrastructure (AMI) and intelligent network (IN) programs across Australia. Yet a more fundamental change is also starting to emerge – a change that is leading companies to work more openly and collaboratively toward a smarter energy value chain.

This renewed sense of purpose gives energy and utilities organizations an opportunity to think and act in dynamic new ways as they re-engineer their operations to:

  • Transform the grid from a rigid, analog system to a responsive and automated energy delivery system by driving operational excellence;
  • Empower consumers and improve their satisfaction by providing them with near real-time, detailed information about their energy usage; and
  • Reduce greenhouse gas emissions to meet or exceed environmental regulatory requirements while maintaining a sufficient, cost-effective power supply.

A Global Issue

In Australia, Country Energy, a leading essential services corporation owned by the New South Wales Government, is leading the move to change not just its own organization, but the entire electricity supply industry.

With a workforce of around 4,000 employees and Australia's largest power supply network, covering 95 percent of New South Wales' landmass, Country Energy recognized that the scale and scope of this industry challenge meant no single player could find all the answers alone.

A Powerful Alliance

Formed by IBM, the Global Intelligent Utilities Network (IUN) Coalition represents a focused and collaborative effort to address the many economic, social and environmental pressures facing these organizations as they shape, accelerate and share in the development of the smart grid. Counting just one representative organization from each major urban electricity market, the coalition will collaborate to enable the rapid development of solutions, adoption of open industry-based standards, and creation of informed policy and regulation.

Not only does the coalition believe these three streams of collaboration will help drive the adoption of the IUN, or smart grid, in markets across the planet, but the sharing of best practice information and creation of a unified direction for the industry will help reduce regulatory, financial, market and implementation risks. And, like all productive collaborative relationships, the rewards for individual members are likely to become amplified as the group grows, learns and shares.

Global Coalition, Local Results

As Australia's only member of the coalition, Country Energy has been quick to capitalize on – and contribute to – the benefits of the global knowledge base, adapting lessons from overseas operators in both developed and emerging markets, and applying them to the unique challenges of a huge landmass with a decentralized population.

From its base in a nation rich in natural resources, the Australian energy and utilities industry is quickly moving to adapt to the emergence of a carbon economy.

One of Country Energy's key projects in this realm is the development of its own Intelligent Network (IN), which provides the platform for its future network strategy, incorporates distributed generation and storage, and enables consumer interaction through real-time information on energy consumption, cost and greenhouse footprint.

Community Collaboration

Keen to understand how the IN will work for customers and its own employees, Country Energy is moving the smart grid off the page and into real life.

Country Energy has identified two communities in which to demonstrate, measure and evaluate the technical and commercial viability of IN initiatives, with the primary goal of learning from both the suitability of the solutions implemented and the operational partnership models by which they will be delivered.

These two IN communities are intended to provide a live research environment to evaluate current understandings and technologies, and will include functionality across nine areas, including smart meters, electrical network monitoring and control, and consumer interaction and response.

Demonstrating the Future

In preparing to put the digital age to work, and to practically demonstrate to stakeholders what an IN will deliver, Country Energy has developed Australia's first comprehensive IN Research and Demonstration Centre near Canberra.

This interactive centre shows what the power network of the not-too-distant future will look like and how it will change the way power is delivered, managed and used.

The centre includes a residential setting to demonstrate the "smart home of the future," while giving visitors a preview of an energy network that automatically detects where a power interruption occurs, providing up-to-date information to network operators and field crews.

An initiative as far-reaching as the IN will rely on human understanding as much as it does on technology and infrastructure.

Regional Delivery Model

In addition to the coalition, IBM and Country Energy developed and implemented an innovative new business model to transform Country Energy's application development and support capability. In 2008, Country Energy signed a four-year agreement with IBM to establish a regional development centre, located in the city of Bathurst.

The centre is designed to help maximize cost efficiencies, accelerate the pace of skills transfer through close links with the local higher-education institution, Charles Sturt University, and support Country Energy's application needs as it moves forward on its IN journey. The centre is also providing services to other IBM clients.

Through the centre, Country Energy aims to improve the service levels and innovations delivered to its business via skills transfer to its own staff. The outcome also allows Country Energy to meet its commitment to support regional areas and offers a viable alternative to global delivery models.

Looking to the Future

In many ways, the energy and utilities industry has come to symbolize the crossroads at which many of the planet's systems find themselves at this moment in time: legacy systems are operating in an economic and environmental ecosystem that is simply unable to sustain current levels – let alone the projected demands of global growth.

Yet help is at hand, infusing these systems with the instrumentation to extract real-time data from every point in the value chain, interconnecting these points to allow the constant, back-and-forward flow of information, and finally, employing the power of analytics to give these systems the gift of intelligence.

In real terms, IBM and Country Energy are harnessing the depth of knowledge and expertise of the Global IUN Coalition, collaborating to help change the way the industry operates at a fundamental level in order to create an IN. This new smart grid will operate as an automated energy delivery system, empowering consumers and improving their satisfaction by providing them with near real-time, detailed information about their energy usage.

And for the planet that these consumers – and billions of others – rely upon, Country Energy's efforts will help reduce greenhouse gas emissions while maintaining that most basic building block of human development: safe, dependable, available and cost-effective power.

Reference

  1. Commonwealth of Australia, Commonwealth Treasury. Australia's Low Pollution Future: The Economics of Climate Change Mitigation. 30 October 2008.

Author's Note: This customer story is based on information provided by Country Energy and illustrates how one organization uses IBM products. Many factors have contributed to the results and benefits described. IBM does not guarantee comparable results elsewhere.

Power and Patience

The U.S. utility industry – particularly its electricity-producing branch; there are also natural gas and water utilities – has found itself in a new, and very uncomfortable, position. Throughout the first quarter of 2009 it was front and center in the political arena.

Politics has been involved in the U.S. electric generation and distribution industry since its founding in the late 19th Century by Thomas Edison. Utilities have been regulated entities almost since the beginning and especially after the 1930s when the federal government began to take a much greater role in the direction and regulation of private enterprise and national economics.

What is new as we are about to enter the second decade of the 21st Century is that not only is the industry being in large part blamed for a newly discovered pollutant, carbon dioxide, which is naturally ubiquitous in the Earth’s atmosphere, but it also is being tasked with pulling the nation out of its worst economic recession since the Great Depression of the 1930s. Oh, and in your spare time, electric utilities, enable the remaking of the automobile industry, eliminate the fossil fuels which you have used to generate ubiquitous electricity for 100 years, and accomplish all this while remaining fiscally sound and providing service to all Americans. Finally, please don’t make electricity unaffordable for the majority of Americans.

It's doubtful that very many people have ever accused politicians of being logical, but in 2009 they seem to have decided to simultaneously defy the laws of physics, gravity, time, history and economics. They want the industry to completely remake itself: going from the centralized large-plant generation model created by Edison to widely dispersed smaller generation; from fossil fuel generation to clean "renewable" generation; from a mostly manually controlled and maintained system to a self-healing, ubiquitously digitized and computer-controlled enterprise; from a marginally profitable (5-7 percent), mostly privately owned system to a massive tax collection system for the federal government.

Is all this possible? The answer likely is yes, but in the timeframe being posited, no.

Despite political co-option of the terms "intelligent utility" and "smart grid" in recent times, the electric utility industry has been working in these directions for many years. Distribution automation (DA) – being able to control the grid remotely – is nothing new. Utilities have been working on DA and SCADA (supervisory control and data acquisition) systems for more than 20 years. They also have been building out communications systems – first analog radio for dispatching service crews to far-flung territories, and in recent times, digital systems to reach all of the millions of pieces of equipment they service. The terms were not invented by politicians, but by utilities themselves.

Prior to 2009, all of these concepts were under way at utilities. WE Energies has a working "pod" of all-digital, self-healing, radially designed feeders. The concept is being tried in Oklahoma, Canada and elsewhere. But the pods are small and still experimental. Pacific Gas and Electric, PEPCO and a few others have demonstration projects of "artificial intelligence" on the grid to automatically switch power around outages. TVA and several others have new substation-level servers that allow communications with, data collection from and monitoring of IEDs (intelligent electronic devices) while simultaneously providing a "view" into the grid from anywhere else in the utility, including the boardroom. But all of these are relatively small-scale installations at this point. To distribute them across the national grid is going to take time and a tremendous amount of money. The transformation to a smart grid is under way and accelerating. However, to this point, the penetration is relatively small. Most of the grid is still big and dumb.
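The "self-healing" behavior these pilots demonstrate can be sketched in a few lines. The toy model below shows only the basic fault-isolation-and-restoration logic (often called FLISR); production systems do this through SCADA/DA platforms with far more protection logic than shown here:

    # Toy sketch of fault location, isolation and service restoration on a
    # radial feeder: open switches around the fault, feed the upstream
    # sections from the substation, and back-feed the rest via a tie switch.

    sections = ["S1", "S2", "S3", "S4"]   # ordered substation -> feeder end

    def restore(faulted):
        """Isolate the faulted section; return what can be re-energized."""
        i = sections.index(faulted)
        upstream = sections[:i]           # still fed from the substation
        downstream = sections[i + 1:]     # re-fed by closing the tie switch
        tie_closed = bool(downstream)
        return upstream, downstream, tie_closed

    up, down, tie = restore("S2")
    print("Fed from substation:", up)
    print("Fed via tie switch:", down, "| tie switch closed:", tie)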

Advanced metering infrastructure (AMI) actually was invented by utilities, although vendors serving the industry have greatly advanced the art since the mid-1990s. Utilities installed earlier-generation AMI, called automated meter reading (AMR), for about 50 percent of all customers; the other 50 percent were still being read by meter readers traipsing through people's yards.

AMI, which allows two-way communications with the meters (AMR is mostly one-way), is advancing rapidly, but still has reached less than 20 percent of American homes, according to research by AMI guru Howard Scott and Sierra Energy Group, the research and analysis division of Energy Central. Large-scale installations by Southern Company, Pacific Gas and Electric, Edison International and San Diego Gas and Electric are pushing that percentage up rapidly in 2009, and other utilities are in various stages of pilots. The first installation of a true two-way metering system was at Kansas City Power & Light Co. (now Great Plains Energy) in the mid-1990s.

So the intelligent utility and smart grid were under development by utilities before politicians got into the act. However, the build-out was expected to take perhaps 30 years or more before reaching down to the smallest municipal and co-operative utilities. Many of the smaller utilities haven't even started pilots. Xcel Energy, Minneapolis, is building a smart grid model in one city, Boulder, Colo., but by May 2009, two of the primary architects of the effort, Ray Gogel and Mike Carlson, had left Xcel. Austin Energy has parts of a smart grid installed, but it still reaches only a portion of Austin's population, and "home automation" reaches an even smaller proportion.

There are numerous "paper" models in existence for these concepts. One, developed by Sierra Energy Group more than three years ago, is shown in Figure 1.

Other major portions of what is being envisioned by politicians have yet to be invented or developed. There is no reasonably priced, reasonably practical electric car, nor are there standardized connection systems to recharge them. There are no large-scale transmission systems to reach remote windmill farms or solar-generating facilities, and there is large-scale resistance from environmentalists to building such transmission facilities. Despite some political pronouncements, renewable generation, other than hydroelectric dams, still produces less than 3 percent of America's electricity, and that percentage is climbing very slowly.

Yes, the federal government was throwing some money at the build-out in early 2009 – about $4 billion for smart grid and some $30 billion to $45 billion for renewable energy. But these are drops in the bucket compared to the amount of money – estimated by responsible economists at $3 trillion or more – required just to build and replace the aging transmission systems and automate the grid. This is money utilities don't have and can't get without making the cost of electricity prohibitive for a large percentage of the population. Despite one political pronouncement, windmills in the Atlantic Ocean are not going to replace coal-fired generation in any conceivable time frame, certainly not in the four years of the current administration.

Then, you have global warming. As a political movement, global warming serves as a useful stick to the carrot of federal funding for renewable energy. However, the costs for the average American of any type of tax on carbon dioxide are likely to be very heavy.

In the midst of all this, utilities still have to go to public service commissions in all 50 states for permission to raise rates. If they can’t raise rates – something resisted by most PSCs – they can’t generate the cash to pay for this massive build-out. PSC commissioners also are politicians, by the way, with an average tenure of only about four years, which is hardly long enough to learn how the industry works, much less how to radically reconfigure it in a similar time-frame.

Despite a shortage of engineers and other highly skilled workers in the United States, the smart grid and intelligent utilities will be built in the U.S. But it is a generational transformation, not something that can be done overnight. To expect the utility industry to gear up to get all this done in time to “pull us out” of the most serious recession of modern times just isn’t realistic – it’s political. Add to the scale of the problem political wrangling over every concept and every dollar, mix in a lot of government bureaucracy that takes months to decide how to distribute deficit dollars, and throw in carbon mitigation for global warming and it’s a recipe for disaster. Expect the lights to start flickering along about…now. Whether they only flicker or go out for longer periods is out of the hands of utilities – it’s become a political issue.

Managing the Plant Data Lifecycle

Intelligent Plant Lifecycle Management (iPLM) is the process of managing a generation facility's data and information throughout its lifetime – from initial design through to decommissioning. This paper will look at results from the application of this process in other industries such as shipbuilding, and show how those results are directly applicable to the design, construction, operation and maintenance of complex power generation facilities, specifically nuclear and clean coal plants.

In essence, iPLM can unlock substantial business value by shortening plant development times, and by making it efficient to find, reuse and change plant data. It also enables an integrated and transparent collaborative environment to manage business processes.

Recent and substantial global focus on greenhouse gas emissions, coupled with rising and volatile fossil fuel prices, rapid economic growth in nuclear-friendly Asian countries, and energy security concerns, is driving a worldwide resurgence in commercial nuclear power interest.

The power generation industry is undergoing a global transformation that is putting pressure on traditional methods of operation, and opening the door to substantial innovation. Due to factors such as the transition to a carbon-constrained world, which greatly affects a generation company's portfolio mix decisions, escalating constraints in the global supply chain for raw materials and key plant components, and fuel price volatility and security-of-supply concerns, generation companies must make substantial investments in an environment of increasing uncertainty.

In particular, there is renewed interest globally in the development of new nuclear power plants. Plants continue to be built in parts of Asia and Central Europe, while a resurgence of interest is seen in North America and Europe. Combined with the developing interest in building clean coal facilities, the power generation industry is facing a large number of very complex development projects.

A key constraint being felt worldwide, however, is a severe and increasing shortage of qualified technical personnel to design, build and operate new generation facilities. Additionally, as most of the world's existing nuclear fleet reaches the end of its originally designed life span, relicensing of these nuclear plants to operate another 10, 20 or even 30 years is taking place globally.

Sowing Plant Information

iPLM can be thought of as lifecycle management of information and data about the plant assets (see Figure 1). It also includes the use of this information over the physical plant's complete lifecycle to minimize project and operational risk, and optimize plant performance.

This information includes design specifications, construction plans, component and system operating instructions, real-time and archived operating data, as well as other information sources and repositories. Traditionally, it has been difficult to manage all of this structured and unstructured data in a consistent manner across the plant lifecycle to create a single version of the truth.

In addition, a traditional barrier has existed between the engineering and construction phases, and the operations and maintenance phases (see Figure 2). So even if the technical issues of interconnectivity and data/information management are resolved via an iPLM solution, it is still imperative to change the business processes associated with these domains to take full advantage.

Benefits

iPLM combines the benefits of a fully integrated PLM environment with a connected information repository and the flow of operational functions, including enterprise asset management (EAM) systems. Specific iPLM benefits are:

  • Ability to accurately assess initial requirements before committing to capital equipment orders;
  • Efficient balance of owner requirements with best practices and regulatory compliance;
  • Performing design work and simulation as early as possible to ensure the plant can be built within schedule and budget;
  • Better project execution with real-time information that is updated automatically through links to business processes, tasks, documents, deliverables and other data sources;
  • Designing and engineering multi-disciplinary components – from structure to electrical and fluid systems – to ensure the plant is built right the first time;
  • Ability to virtually plan how plants and structures will be constructed to minimize costly rework;
  • Optimization of operations and maintenance processes to reduce downtime and deliver long-term profits to the owners;
  • Ensuring compliance with regulatory and safety standards;
  • Maximizing design and knowledge reuse from one successful project to another;
  • Managing complexity, including sophisticated plant systems and the interdependent work of engineering consultants, suppliers and the construction sites;
  • Visibility of evolving design and changing requirements to all stakeholders during new or retrofitting projects; and
  • Providing owners and operators a primary repository for all plant information and the processes that govern it throughout the lifecycle.

Benefits accrue at different times in the plant lifecycle, and to different stakeholders. They also depend heavily on the consistent and dedicated implementation of basic iPLM solution tenets.

Value Proposition

PLM solutions enable clients to optimize the creation and management of complex information assets over a project's complete lifecycle. Shipbuilding PLM, in particular, offers an example similar to the commercial nuclear energy generation ecosystem. Defense applications, such as nuclear destroyer and aircraft carrier platform developments, are particularly good examples.

A key aspect of the iPLM value proposition is the seamless integration of data and information throughout the design, build, operate and maintain processes for industrial plants. The iPLM concept is well accepted by the commercial nuclear ecosystem. There is an understanding by engineering companies, utilities and regulators that information/data transparency, information lifecycle management and better communication throughout the ecosystem are necessary to build timely, cost-effective, safe and publicly accepted nuclear power plants.

iPLM leverages capabilities in PLM, EAM and Electronic Content Management (ECM), combined with data management/integration, information lifecycle management, business process transformation and integration with other nuclear functional applications through a Service Oriented Architecture (SOA)-based platform. iPLM can also provide a foundation on which to drive high-performance computing into commercial nuclear operations, since simulation requires consistent, valid, accessible data sets to be effective.

A hallmark of the iPLM vision is that it is an integrated solution in which information related to the nuclear power plant flows seamlessly across a complete and lengthy lifecycle. There are a number of related systems with which an iPLM solution must integrate. Therefore, adherence to industry-standard interoperability and data models is necessary for a robust iPLM solution. An example of an appropriate data model standard is ISO 15926, which has recently been developed to facilitate data interoperability.

Combining EAM and PLM

Incorporating EAM with PLM is an example of one of the key integrations created by an iPLM solution, and it provides several benefits. First, it establishes the basis for a cradle-to-grave data and work process repository for all information applicable to a new nuclear power plant. A single version of the truth becomes available early in the project design, and remains applicable in the construction, start-up and test, and turnover phases of the project.

Second, with the advent of single-step licensing in many parts of the world (consider the COLA, or combined Construction and Operating License Application, in the U.S.), licensing risk is considerably reduced by consistent maintenance of plant information. Demonstrating that the plant being started up is the same plant that was designed and licensed becomes more straightforward and transparent.

Third, using an EAM system during construction, and incrementally incorporating the deep functionality necessary for EAM in plant operations, can facilitate and shorten the plant transfer period from the designers and constructors to the owners and operators.

Finally, the time and cost to build a new plant is significant, and delay in connecting the plant to the grid for the safe generation of megawatts can easily cost millions of dollars. The formidable challenges of nuclear construction, however, may be offset by an SOA-based integrated information system, replacing the traditional unique and custom-designed applications.

To help address these challenges, the power generation industry ecosystem – including utilities, engineering companies, reactor and plant designers, and regulators – can benefit by looking at methodologies and results from other industries that have continued to design, build, operate and maintain highly complex systems over the last 10 to 20 years.

Here we examine what the shipbuilding industry has done, the results it has achieved, and where it is going.

Experiences In Shipbuilding

The shipbuilding industry has many similarities to the development of a new nuclear or clean coal plant. Both involve very complex, long-lifecycle assets (35 to 70 years) that require precise and accurate design, construction, operation and maintenance to fulfill their missions and operate safely over their lifetimes. In addition, the respective timeframes and costs of designing and building these assets (five to 10 years and $5 billion to $10 billion) create daunting challenges from a project management and control point of view.

An example of a successful implementation of an iPLM-like solution in the shipbuilding industry is a project completed for Northrop Grumman's development of the next generation of U.S. surface combat ships, a four-year, $2.9 billion effort. This was a highly complex, collaborative project completed by IBM and Dassault Systemes to design and construct a new fleet of ships with a keen focus on supporting efficient production, operation and maintenance of the platform over its expected lifecycle.

A key consideration in designing, constructing and operating modern ships is the increasing complexity of the assets, including advanced electronics, sensors and communications. These additional systems and requirements greatly multiply the number of simultaneous constraints that must be managed within the design, considered during construction, and maintained and managed during operations. This not only includes more system complexity, but also adds to the importance of effective collaboration, as many different companies and stakeholders must be involved in the ship's overall design and construction.

An iPLM system helps to enforce standardization, enabling lean manufacturing processes and enhancing the producibility of various plant modules. For an information technology architecture to remain relevant over the ship's lifecycle, it is paramount that it be based on open standards and adhere to the most modern software and hardware architectural philosophies.

To provide substantive value for both cost and schedule, tools such as real-time interference checking, advanced visualization, early validation and constructability analysis are key aspects of an iPLM solution in the ship's early design cycle. For instance, early visualization allows feedback from construction, operations and maintenance back into the design process before it's too late to make changes inexpensively.

There are also iPLM solution benefits for the development of future projects. Knowledge reuse is essential for decreasing costs and schedules for future units, and for continuous improvement of already-built units. iPLM provides for more predictable design and construction schedules and costs, reducing risk for the development of new plants.

It is also necessary to consider cultural change within the ecosystem to reap the full iPLM solution benefits. iPLM represents a fundamentally different way of collaborating and closing the loop between the various parts of the ship development and operation lifecycle. As such, people and processes must change to take advantage of the tools and capabilities. Without these changes, much of the benefit of an iPLM solution could be lost.

Here are some sample cost and schedule benefits from Navy shipbuilding implementations of iPLM:

  • Documentation errors: reduced 15 percent;
  • Performance to schedule: increased 25 percent;
  • Labor cost for engineering analysis: reduced 50 percent;
  • Change process cost and time: reduced 15 percent; and
  • Error correction cost during production: reduced 15 percent.

Conclusions

An iPLM approach to the design, construction, operation and maintenance of a commercial nuclear power plant – while requiring reactor designers, engineering companies, owner/operators and regulators to fundamentally change the way they approach these projects – has been shown in other industries to have substantial benefits related to cost, schedule and long-term operation and maintainability.

Developing and delivering two plants to the customer – the physical plant and the "digital plant" – yields substantial advantages during both plant construction and operation. Financial markets, shareholders, regulators and the general public will have more confidence in the development and operation of these plants through the predictability, performance to schedule and cost, and transparency that an iPLM solution can help provide.

Future of Learning

The nuclear power industry is facing significant employee turnover, which may be exacerbated by the need to staff new nuclear units. To maintain a highly skilled workforce to safely operate U.S. nuclear plants, the industry must find ways to expedite training and qualification, enhance knowledge transfer to the next generation of workers, and develop leadership talent to achieve excellent organizational effectiveness.

Faced with these challenges, the Institute of Nuclear Power Operations (INPO), the organization charged with promoting safety and reliability across the 65 nuclear electric generation plants operating in the U.S., created a “Future of Learning” initiative. It identified ways the industry can maintain the same high standard of excellence and record of nuclear safety, while accelerating training development, individual competencies and plant training operations.

The nuclear power industry is facing a perfect storm. Like much of the industrialized world, it must address issues associated with an aging workforce, since many of its skilled workers and nuclear engineering professionals are hitting retirement age, moving out of the industry and beginning other pursuits.

Second, as baby boomers transition out of the workforce, they will be replaced by an influx of Generation Y workers. Many workers in this "millennials" generation are not aware of the heritage driving the industry's single-minded focus on safety. They are asking for new learning models that utilize the technologies which are so much a part of their lives.

Third, even as this big crew change takes place, there is increasing demand for electricity. Many are turning to cleaner technologies – solar, wind, and nuclear – to close the gap. And there is a resurgence in requests for building new nuclear plants, or adding new reactors at existing plants. This nuclear renaissance also requires training and preparation to take on the task of safely and reliably operating our nuclear power plants.

It is estimated there will be an influx of 25,000 new workers in the industry over the next five years, with an additional 7,000 new workers needed if just a third of the new plants are built. Given that incoming workers are more comfortable using technology for learning, and that delivery models that include a blend of classroom-based, instructor-led, and Web-based methods can be more effective and efficient, the industry is exploring new models and a new mix of training.

INPO was created by the nuclear industry in 1979 following the Three Mile Island accident. It has 350 full-time and loaned employees. As a nonprofit organization, it is chartered to promote the highest levels of safety and reliability – in essence, to promote excellence – in the operation of nuclear electric generating plants. All U.S. nuclear operating companies are members.

INPO’s responsibilities include evaluating member nuclear site operations, accrediting each site’s nuclear training programs and providing assistance and information exchange. It has established the National Academy for Nuclear Training, and an independent National Nuclear Accrediting Board. INPO sends teams to sites to evaluate their respective training activities, and each station is reviewed at least every four years by the accrediting board.

INPO has developed guidelines for 12 specifically accredited programs (six operations and six maintenance/technical), including accreditation objectives and criteria. It also offers courses and seminars on leadership, where more than 1,500 individuals participate annually, from supervisors to board members. Lastly, it operates NANTeL (National Academy for Nuclear Training e-Learning system) with 200 courses for general employee training for nuclear access. More than 80,000 nuclear workers and sub-contractors have completed training over the Web.

The Future of Learning

In 2008, to systematically address workforce and training challenges, the INPO Future of Learning team partnered with IBM Workforce and Learning Solutions to conduct more than 65 one-on-one interviews with chief executive officers, chief nuclear officers, senior vice presidents, plant managers, plant training managers and other leaders in the extended industry community. The team also completed 46 interviews with plant staff during a series of visits to three nuclear power plants. Lastly, the team developed and distributed a survey that was sent to training managers at the 65 nuclear plants, achieving a 62 percent response rate.

These are statements the team heard:

  • “Need to standardize a lot of the training, deliver it remotely, preferably to a desktop, minimize the ‘You train in our classroom in our timeframe’ and have it delivered more autonomously so it’s likely more compatible with their lifestyles.”
  • “We’re extremely inefficient today in how we design/develop and administer training. We don’t want to carry inefficiencies that we have today into the future.”
  • “Right now, in all training programs, it’s a one-size-fits-all model that’s not customized to an individual’s background. Distance learning would enable this by allowing people to demonstrate knowledge and let some people move at a faster pace.”
  • “We need to have ‘real’ e-learning. We’ve been exposed to less than adequate, older models of e-learning. We need to move away from ‘page turners’ and onto quality content.”

Several recommendations were generated as a result of the study. The first focused on ways to improve INPO’s current training offerings by adding leadership development courses, ratcheting up the interactivity of the Web-based and e-learning offerings in NANTeL and developing a “nuclear citizenship” course for new workers in the industry.

Second, there were recommendations about better utilizing training resources across the industry by centralizing common training, beginning with instructor training and certification and generic fundamentals courses. It was estimated that 50 percent of the accredited training materials are common across the industry. To accomplish this objective, INPO is exploring an industry infrastructure that would enable centralized training material development, maintenance and delivery.

The last set of recommendations focused on methods for better coordination and efficiency of training, including developing processes for certifying vendor training programs, and providing a jump-start to common community college and university curriculum.

In 2009, INPO is piloting a series of Future of Learning initiatives which will help determine the feasibility, cost-effectiveness, readiness and acceptance of this first set of recommendations. It is starting to look more broadly at ways it can utilize learning technology to drive economies of scale, accelerative and prescriptive learning, and deliver value to the nuclear electric generation industry.

Where Do We Go From Here?

Beyond the initial perfect storm is another set of factors driving the future of learning.

First, consider the need for speed. It has been said that “If you are not learning at the speed of change, you are falling behind.”

In “25 Lessons from Jack Welch,” the former General Electric CEO said, “The desire, and the ability, of an organization to continuously learn from any source, anywhere – and to rapidly convert this learning into action – is its ultimate competitive advantage.” Giving individuals, teams and organizations the tools and technologies to accelerate and broaden their learning is an important part of the future of learning.

Second, consider the information explosion – the sheer volume of information available, the convenience of access (due in large part to continuing developments in technology) and the diversity of what is available. When there is too much information to digest, a person cannot locate and make use of what he or she actually needs; overload occurs. The future of learning should enable the learner to sort through information and find knowledge.

Third, consider new developments in technology. Generations X and Y are considered “digital natives.” They expect that the most current technologies are available to them – including social networking, blogging, wikis, immersive learning and gaming – and to not have them is unthinkable.

Impact of New Technology

The philosophy of training has morphed from “just-in-case” (teach them everything and hope they will remember when they need it), to “just-in-time” (provide access to training just before the point of need), to “just-for-me.” With respect to the latter, learning is presented in a preferred media, with a learning path customized to reflect the student’s preferred learning style, and personalized to address the current and desired level of expertise within any given time constraint.

Imagine a scenario in which a maintenance technician at a nuclear plant has to replace a specialized valve – something she either hasn’t done for a while, or hasn’t done before. In a Web 2.0 world, she should be able to run a query on her iPhone or similar handheld device and pull up the maintenance procedure for that particular valve, access its maintenance records, view a video of the approved replacement procedure, or reach an expert who could coach her through the process.

Learning Devices

What needs to be in place to enable this vision of the future of learning? First, workers will need a device that can access the information by connecting over a secure wireless network inside the plant. Second, the learning has to be available in small chunks – learning nuggets or learning assets. Third, the learning needs to be assembled along the dimensions of learning style, desired and target level of expertise, time available and media type, among other factors. Finally, experts need to be identified, tagged to particular tasks and activities, and made accessible.

Fortunately, some of the same learning technology tools that will enable centralized maintenance and accelerated development will also facilitate personalized learning. When training is organized at a more granular level – the learning asset level – not only can it be leveraged over a variety of courses and courseware, it can also be re-assembled and ported to a variety of outputs such as lesson books, e-learning and m-learning (mobile-learning).
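
To make the idea of granular, re-assemblable learning assets concrete, here is a minimal sketch in Python. Everything in it – the LearningAsset fields, the assemble_path heuristic and the sample catalog – is hypothetical, invented for illustration rather than drawn from NANTeL or any actual INPO system.

```python
# Hypothetical sketch: learning assets assembled into a personalized path.
from dataclasses import dataclass

@dataclass
class LearningAsset:
    topic: str
    media: str          # e.g., "video", "text", "simulation"
    minutes: int        # time required to complete
    level: int          # expertise level the asset targets (1 = novice)

def assemble_path(assets, topic, current_level, target_level,
                  preferred_media, time_budget):
    """Select assets that bridge current to target expertise,
    favoring the learner's preferred media within a time budget."""
    candidates = [a for a in assets
                  if a.topic == topic and current_level < a.level <= target_level]
    # Prefer the learner's media type, then shorter assets.
    candidates.sort(key=lambda a: (a.media != preferred_media, a.minutes))
    path, used = [], 0
    for asset in candidates:
        if used + asset.minutes <= time_budget:
            path.append(asset)
            used += asset.minutes
    return path

catalog = [
    LearningAsset("valve-replacement", "video", 10, 2),
    LearningAsset("valve-replacement", "text", 25, 2),
    LearningAsset("valve-replacement", "simulation", 30, 3),
]
print(assemble_path(catalog, "valve-replacement", 1, 3, "video", 45))
```

Because the assets carry their own metadata, the same catalog could be re-assembled into a lesson book, an e-learning module or an m-learning feed without re-authoring the content.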

The example above pointed out another shift in our thinking about learning. Traditionally, our paradigm has been that learning occurs in a classroom, and when it occurs, it has taken the form of a course. In the example above, the learning takes place anywhere and anytime, moving from the formal classroom environment to an informal environment. Of course, just because learning is “informal” does not mean it is accidental, or that it occurs without preparation.

Some estimates claim 10 percent of our learning is achieved through formal channels, 20 percent from coaching, and 70 percent through informal means. Peter Henschel, former director of the Institute for Research on Learning, raised an important question: If nearly three-quarters of learning in corporations is informal, can we afford to leave it to chance?

There are still several open issues regarding informal learning:

  • How do we evaluate the impact/effectiveness of informal learning? (Informal learning, but formal demonstration of competency/proficiency);
  • How do we record one’s participation and skill-level progression in informal learning? (Informal learning, but formal recording of learning completion);
  • Who will create and maintain informal learning assets? (Informal learning, but formal maintenance and quality assurance of the learning content); and
  • When does informal learning need a formal owner (in a full- or part-time role)? (Informal learning, but will need formal policies to help drive and manage).
In the nuclear industry, accurate and up-to-date documentation is a necessity. As the industry moves toward more effective use of informal channels of learning, it will need to address these issues.

Immersive Learning (Or Virtual Worlds)

The final frontier for the future of learning is expansion into virtual worlds, also known as immersive learning. Although Second Life (SL) is the best known virtual world, there are also emerging competitors, including Active Worlds, Forterra (OLIVE), Qwaq and Unisfair.

Created in 2003 by Linden Lab of San Francisco, SL is a three-dimensional, virtual world that allows users to buy “property,” create objects and buildings and interact with other users. Unlike a game with rules and goals, SL offers an open-ended platform where users can shape their own environment. In this world, avatars do many of the same things real people do: work, shop, go to school, socialize with friends and attend rock concerts.

From a pragmatic perspective, working in an immersive learning environment such as a virtual world provides several benefits that make it an effective alternative to real life:

  • Movement in 3-D space. A virtual world could be useful in any learning situation involving movement, danger, tactics, or quick physical decisions, such as emergency response.
  • Engendering Empathy. Participants experience scenarios from another person’s perspective. For example, the Future of Learning team is exploring ways to re-create the control room experience during the Three Mile Island incident, to provide a cathartic experience for the next generation workforce so they can better appreciate the importance of safety and human performance factors.
  • Rapid Prototyping and Co-Design. A virtual world is an inexpensive environment for quickly mocking up prototypes of tools or equipment.
  • Role Playing. By conducting role plays in realistic settings, instructors and learners can take on various avatars and play those characters.
  • Alternate Means of Online Interaction. Although users would likely not choose a virtual world as their primary online communication tool, it provides an alternative means of indicating presence and allowing interaction. Users can have conversations, share note cards, and give presentations. In some cases, SL might be ideal as a remote classroom or meeting place to engage across geographies and utility boundaries.

Robert Amme, a physicist at the University of Denver, has another laboratory in SL. Funded by a grant from the Nuclear Regulatory Commission, his team is building a virtual nuclear reactor to help train the next generation of environmental engineers on how to deal with nuclear waste (see Figure 1). The INPO Future of Learning team is exploring ways to leverage this type of learning asset as part of the nuclear citizenship initiative.

There is no doubt that nuclear power generation is once again on an upswing, but critical to its revival and longevity will be the manner in which we prepare the current and next generation of workers to become outstanding stewards of a safe, effective, clean-energy future.

Online Transient Stability Controls

For the last few decades, the growth of the world’s population and the corresponding demand for electrical energy have driven a huge increase in the supply of electrical power. However, for logistical, environmental, political and social reasons, this power is rarely generated near its consumers, necessitating the growth of very large and complex transmission networks. The addition of variable wind energy in remote locations is only exacerbating the situation. In addition, transmission grid capacity has not kept pace with either generation capacity or consumption, while remaining extremely vulnerable to potential large-scale outages due to outdated operational capabilities.

For example, today if a fault is detected in the transmission system, the only course is to shed both load and generation, often without consideration of real-time consequences or analysis of alternatives. If not done rapidly, this can result in a widespread, cascading power system blackout. And while it is necessary to remove the factors that might lead to a large-scale blackout, restricting power flow or applying other countermeasures may achieve this only by sacrificing economical operation. Thus, the flexible and economical operation of an electric power system is often in conflict with the requirement for improved supply reliability and system stability.

Limits of Off-line Approaches

One approach to solving this problem involves stabilization systems deployed to prevent generator step-out by controlling generator acceleration through power shedding, in which some generators are shut off at the time of a power system fault.

In 1975, an off-line special protection system (SPS) for power flow monitoring was introduced in Japan to achieve transient stability of the trunk power system and power source system after a network expansion. Its settings were initially determined in advance by manual calculation, using transient stability simulation programs to study many contingencies against typical power flow patterns.

This type of off-line solution has the following problems:

  • Planning, design, programming, implementation and operational tasks are laborious. A vast number of simulations are required to determine the setting tables and required countermeasures, such as generator shedding, whenever transmission lines are constructed;
  • It is not well suited to variable generation sources such as wind or photovoltaic farms;
  • It is not suitable for reuse and replication, incurring high maintenance costs; and
  • Excessive travel time and related labor expense is required for the engineer and field staff to maintain the units at numerous sites.

By contrast, an online TSC solution employs various sensors placed throughout the transmission network, substations and generation sources. These sensors are connected via high-speed communications to regional computer systems that monitor for transients that may affect system stability and detect when contingency actions must be executed. These systems in turn are connected to centralized computers that monitor the network of distributed computers, building and distributing contingency plans based on historical and recent information. If a transient event occurs, the entire ecosystem responds within 150 ms to detect the event, analyze it, determine the correct course of action and execute the appropriate set of contingencies to preserve the stability of the power network.

In recent years, high-performance computational servers have been developed, and their costs have fallen enough to deploy many of them in parallel and/or in a distributed computing architecture. The result is a system that not only greatly increases the availability and reliability of the power system but also optimizes the throughput of the grid: network efficiency increases, and more power can be moved across the existing transmission grid without significant investment in new transmission lines.

Solution and Elements

In 1995, for the first time ever, an online TSC system was developed and introduced in Japan. This solution provided a system stabilization procedure required by the construction of the new 500kV trunk networks of Chubu Electric Power Co. (CEPCO) [1-4]. Figure 1 shows the configuration of the online TSC system. This system introduced a pre-processing online calculation in the TSC-P (parent) besides a fast, post-event control executed by the combination of TSC-C (child) and TSC-T (terminal). This online TSC system can be considered an example of a self-healing solution of a smart grid. As a result of periodic simulations using the online data in TSC-P, operators of energy management systems/supervisory control and data acquisition (EMS/SCADA) are constantly made aware of stability margins for current power system situations.

Using the same online data, the periodic calculations performed in the TSC-P reflect current power network conditions and determine the proper countermeasures to mitigate transient system events. The TSC-P simulates transient stability dynamics for about 100 contingencies of the power systems on the 500 kV, 275 kV and 154 kV transmission networks. The setting tables for required countermeasures, such as generator shedding, are periodically sent to the TSC-Cs located at main substations. The TSC-Ts, located at generation stations, shed the generators when an actual fault occurs. The actual generator shedding by the combination of TSC-Cs and TSC-Ts is completed within 150 ms after the fault to maintain the system’s stability.
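
The division of labor among TSC-P, TSC-C and TSC-T can be summarized in a short sketch. The fault identifiers, table contents and function names below are invented for illustration; the real TSC-P derives its setting tables from full transient stability simulations rather than the static lookup shown here.

```python
# Conceptual sketch of the online TSC flow (all data invented).

# TSC-P (parent): periodic simulation of ~100 contingencies produces a
# setting table mapping each anticipated fault to the generators to shed.
SETTING_TABLE = {
    "fault:line-A": ["gen-1", "gen-3"],
    "fault:line-B": ["gen-2"],
}

def tsc_c_lookup(fault_id):
    """TSC-C (child) at a main substation: look up the pre-computed
    countermeasure for the detected fault."""
    return SETTING_TABLE.get(fault_id, [])

def tsc_t_shed(generators):
    """TSC-T (terminal) at the generation station: trip the listed
    units. The whole chain must complete within ~150 ms of the fault."""
    for g in generators:
        print(f"shedding {g}")

def on_fault(fault_id):
    tsc_t_shed(tsc_c_lookup(fault_id))

on_fault("fault:line-A")
```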

Customer Experiences and Benefits

Figure 2 shows the locations of online TSC systems and their coverage areas in CEPCO’s power network. There are two online TSC systems currently operating; namely, the trunk power TSC system, to protect the 500 kV trunk power system introduced in 1995, and the power source TSC system to protect the 154 kV to 275 kV power source systems around the generation stations.

Actual performance data have shown some significant benefits:

  • Total transfer capability (TTC) is improved through elimination of transient stability limitations. TTC is set by the minimum of several limits – the thermal limit of the transmission lines, the transient stability limit, the frequency stability limit and the voltage stability limit – and for long transmission lines from generation plants the transient stability limit often governs. CEPCO was able to introduce high-efficiency, combined-cycle power plants without constructing new transmission lines; TTC was increased from 1,500 MW to 3,500 MW by introducing the online TSC solution (a numeric sketch of this rule follows this list).
  • Power shedding is optimized. Not only is the power flow of the transmission line on which a fault occurs assessed, but the effects of other power flows surrounding the fault point are included in the analysis to decide the precise stability limit. The online TSC system can also reflect the constraints and priorities of each generator to be shed. To ensure a smooth restoration after the fault, restart time of shut off generators, for instance, can also be included.
  • When constructing new transmission lines, an off-line SPS requires numerous studies assuming various power flow patterns. After introduction of the online TSC system, supporting new transmission lines became more efficient, requiring only an update to the equipment database used by the TSC-P simulations.
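
As referenced in the first bullet above, the TTC rule reduces to taking a minimum over the applicable limits. The figures below are invented for illustration, chosen only to echo the 1,500 MW to 3,500 MW improvement cited in that bullet.

```python
# TTC = min(applicable limits); all numbers are illustrative, in MW.
limits_offline = {"thermal": 3500, "transient": 1500,
                  "frequency": 4000, "voltage": 3800}
print(min(limits_offline.values()))  # 1500: transient stability binds

# With the online TSC mitigating transient instability in real time,
# the transient limit no longer binds:
limits_online = {k: v for k, v in limits_offline.items() if k != "transient"}
print(min(limits_online.values()))   # 3500: the thermal limit now governs
```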

In 2003, this CEPCO system received the 44th Annual Edison Award from the Edison Electric Institute (EEI), recognizing CEPCO’s achievement with the world’s first application of this type of system, and the contribution of the system to efficient power management.

Today, benefits continue to accrue. A new TSC-P, which adopts the latest high-performance computation servers, is now under construction for operation in 2009 [3]. The new system will shorten the calculation interval from every five minutes to every 30 seconds in order to reflect power system situations as precisely as possible. This interval was determined by the analysis of various stability situations recorded by the current TSC-P over more than 10 years of operation.

Additionally, while the current TSC-P uses the same online data as the EMS/SCADA, the system can also take emergency control action against small-signal instability by receiving phasor measurement unit (PMU) data to detect divergence of phasor angles and voltages among the main substations.

Summary

The online TSC system is expected to realize optimum stabilization control of recent complicated power system conditions by obtaining power system information online and carrying out stability calculations at specific intervals. The online TSC will thus help utilities achieve better returns on investment in new or renovated transmission lines, reducing outage time and enabling a more efficient smart grid.

References

  1. Ota, Kitayama, Ito, Fukushima, Omata, Morita and Y. Kokai, “Development of Transient Stability Control System (TSC System) Based on Online Stability Calculation,” IEEE Trans. on Power Systems, Vol. 11, No. 3, pp. 1463-1472, August 1996.
  2. Koaizawa, Nakane, Omata and Y. Kokai, “Actual Operating Experience of Online Transient Stability Control System (TSC System),” IEEE PES Winter Meeting, 2000, Vol. 1, pp. 84-89.
  3. Takeuchi, Niwa, Nakane and T. Miura, “Performance Evaluation of the Online Transient Stability Control System (Online TSC System),” IEEE PES General Meeting, June 2006.
  4. Takeuchi, Sato, Nishiiri, Kajihara, Kokai and M. Yatsu, “Development of New Technologies and Functions for the Online TSC System,” IEEE PES General Meeting, June 2006.

The Power of Prediction: Improving the Odds of a Nuclear Renaissance

After 30 years of disfavor in the United States, the nuclear power industry is poised for resurgence. With the passage of the Energy Policy Act of 2005, the specter of over $100 per barrel oil prices and the public recognition that global warming is real, nuclear power is now considered one of the most practical ways to clean up the power grid and help the United States reduce its dependence on foreign oil. The industry has responded with a resolve to build a new fleet of nuclear plants in anticipation of what has been referred to as a nuclear renaissance.

The nuclear power industry is characterized by a remarkable level of physics and mechanical science. Yet, given the confluence of a number of problematic issues – an aging workforce, the shortage of skilled trades, the limited availability of equipment and parts, and a history of late, over-budget projects – questions arise about whether the level of management science the industry plans to use is sufficient to navigate the challenges ahead.

According to data from the Energy Information Administration (EIA), nuclear power supplies about 20 percent of U.S. electricity generation, with approximately 106 gigawatts (GW) of capacity at 66 plants that house 104 reactor units. To date, more than 30 new reactors have been proposed, which would produce a net increase of approximately 19 GW of nuclear capacity through 2030. Considering the growth of energy demand, this increased capacity will barely keep pace with increasing base load requirements.

According to Assistant Secretary for Nuclear Energy Dennis Spurgeon, we will need approximately 45 new reactors online by 2030 just to maintain the 20 percent share of U.S. electricity generation that nuclear power already holds.

Meanwhile, Morgan Stanley vice chairman Jeffrey Holzschuh is very positive about the next generation of nuclear power but warns that the industry’s future is ultimately a question of economics. “Given the history, the markets will be cautious,” he says.

As shown in Figures 1-3, nuclear power is cost competitive with other forms of generation, but its upfront capital costs are comparatively high. Historically, long construction periods have led to serious cost volatility. The viability of the nuclear power industry ultimately depends on its ability to demonstrate that plants can be built economically and reliably. Holzschuh predicts, “The first few projects will be under a lot of public scrutiny, but if they are approved, they will get funded. The next generation of nuclear power will likely be three to five plants or 30, nothing in between.”

Due to its cohesive identity, the nuclear industry is viewed by the public and investors as a single entity, making the fate of industry operators – for better or for worse – a shared destiny. For that reason, it’s widely believed that if these first projects suffer the same sorts of significant cost overruns and delays experienced in the past, the projected renaissance will quickly give way to a return to the dark ages.

THE PLAYERS

Utility companies, regulatory authorities, reactor manufacturers, design and construction vendors, financiers and advocacy groups all have critical roles to play in creating a viable future for the nuclear power industry – one that will begin with the successful completion of the first few plants in the United States. By all accounts, an impressive foundation has been laid, beginning with an array of government incentives (such as loan guarantees and tax credits) and simplified regulation to help jump-start the industry.

Under the Energy Policy Act of 2005, the U.S. Department of Energy has the authority to issue $18.5 billion in loan guarantees for new nuclear plants and $2 billion for uranium enrichment projects. In addition, there’s standby support for indemnification against Nuclear Regulatory Commission (NRC) and litigation-oriented delays for the first six advanced nuclear reactors. The Treasury Department has issued guidelines for an allocation and approval process for production tax credits for advanced nuclear: 1.8 cents per kilowatt-hour production tax credit for the first eight years of operation with the final rules to be issued in fiscal year 2008.

The 20-year renewal of the Price-Anderson Act in 2005 and anticipated future restrictions on carbon emissions further improve the comparative attractiveness of nuclear power. To be eligible for the 2005 production tax credits, a license application must be tendered to the NRC by the end of 2008 with construction beginning before 2014 and the plant placed in service before 2021.

The NRC has formed an Office of New Reactors (NRO), and David Matthews, director of the Division of New Reactor Licensing, led the development of the latest revision of a new licensing process that’s designed to be more predictable by encouraging the standardization of plant designs, resolving safety and environmental issues and providing for public participation before construction begins. With a fully staffed workforce and a commitment to “enable the safe, secure and environmentally responsible use of nuclear power in meeting the nation’s future energy needs,” Matthews is determined to ensure that the NRC is not a risk factor that contributes to the uncertainty of projects but rather an organizing force that will create predictability. Matthews declares, “This isn’t your father’s NRC.”

This simplified licensing process consists of the following elements:

  • An early site permit (ESP) for locations of potential facilities.
  • Design certification (DC) for the reactor design to be used.
  • Combined operating license (COL) for the certified reactor as designed to be located on the site. The COL contains the inspections, tests, analyses and acceptance criteria (ITAAC) to demonstrate that the plant was built to the approved specifications.

According to Matthews, the best-case scenario for the time period between when a COL is docketed to the time the license process is complete is 33 months, with an additional 12 months for public hearings. When asked if anything could be done to speed this process, Matthews reported that every delay he’s seen thus far has been attributable to a cause beyond the NRC’s control. Most often, it’s the applicant that’s having a hard time meeting the schedule. Recently, approved schedules are several months longer than the best-case estimate.

The manufacturers of nuclear reactors have stepped up to the plate to achieve standard design certification for their nuclear reactors; four are approved, and three are in progress.

Utility companies are taking innovative approaches to support the NRC’s standardization principles, which directly impact costs. (Current conventional wisdom puts the price of a new reactor at between $4 billion and $5.5 billion, with some estimates of fully loaded costs as high as $7 billion.) Consortiums have been formed to support cross-company standardization around a particular reactor design. NuStart and UniStar are multi-company consortiums collaborating on the development of their COLs.

Bryce Shriver, who leads PPL Corp.’s nuclear power strategy and recently announced that PPL had selected UniStar to build its next nuclear facility, is impressed with the level of standardization UniStar is employing for its plants. From the specifics of the reactor design to the carpet color, UniStar – with four plants on the drawing board – intends to make each plant as identical as possible.

Reactor designers and construction companies are adding to the standardization with turnkey approaches, formulating new construction methods that include modular techniques; sophisticated scheduling and configuration management software; automated data; project management and document control; and designs that are substantially complete before construction begins. Contractors are taking seriously the lessons learned from plants built outside the United States, and they hope to leverage what they have learned in the first few U.S. projects.

The stewards of the existing nuclear fleet also see themselves as part of the future energy solution. They know that continued safe, high-performance operation of current plants is key to maintaining public and state regulator confidence. Most of the scheduled plants are to be co-located with existing nuclear facilities.

Financing nuclear plant construction involves equity investors, utility boards of directors, debt financiers and (ultimately) the ratepayers represented by state regulatory commissions. Despite the size of these deals, the financial community has indicated that debt financing for new nuclear construction will be available. The bigger issue lies with the investors. The more equity-oriented the risk (principally borne by utilities and ratepayers), the more caution there is about the structure of these deals. The debt financiers are relying on the utilities and the consortiums to do the necessary due diligence and put up the equity. There’s no doubt that the federal loan guarantees and subsidies are an absolute necessity, but this form of support is largely driven by the perceived risk of the first projects. Once the capability to build plants in a predictable way (in terms of time, cost, output and so on) has been demonstrated, market forces are expected to be very efficient at allocating capital to these kinds of projects.

The final key to the realization of a nuclear renaissance is the public. Americans have become increasingly concerned about fossil fuels, carbon emissions and the nation’s dependence on foreign oil. The surge in oil prices has focused attention on energy costs and national security. Coal-based energy production is seen as an environmental issue. Although the United States has plenty of access to coal, dealing with carbon emissions using clean coal technology involves sequestering it and pumping it underground. PPL chairman Jim Miller describes the next challenge for clean coal as NUMBY – the “Not under my back yard” attitude the public is likely to adopt if forced to consider carbon pumped under their communities. Alternative energy sources such as wind, solar and geothermal enjoy public support, but they are not yet scalable for the challenge of cleaning up the grid. In general, the public wants clean, safe, reliable, inexpensive power.

THE RISKS

Will nuclear fill that bill and look attractive compared with the alternatives? Although progress has been made and the stage is set, critical issues remain, and they could become problematic. While the industry clearly sees and is actively managing some of these issues, there are others the industry sees but is not as certain about how to manage – and still others that are so much a part of the fabric of the industry that they go unrecognized. Any one of these issues could slow progress; the fact that there are several that could hit simultaneously multiplies the risk exponentially.

The three widely accepted risk factors for the next phase of nuclear power development are the variability of the cost of uranium, the availability of quality equipment for construction and the availability of well-trained labor. Not surprising for an industry that’s been relatively sleepy for several decades, the pipeline for production resources is weak – a problem compounded by the well-understood coming wave of retirements in the utility workforce and the general shortage of skilled trades needed to work on infrastructure projects. Combine these constraints with a surge in worldwide demand for power plants, and it’s easy to understand why the industry is actively pursuing strategies to secure materials and train labor.

The reactor designers, manufacturers and construction companies that would execute these projects display great confidence. They’re keen on the “turnkey solution” as a way to reduce the risk of multiple vendors pointing fingers when things go wrong. Yet these are the same firms that have been openly criticized for change orders and cost overruns. Christopher Crane, chief operating officer of the utility Exelon Corp., warned contractors in a recent industry meeting that the utilities would “not take all the risk this time around.” When faced with complicated infrastructure development in the past, vendors have often pointed to their expertise with complex projects. Is the development of more sophisticated scheduling and configuration management capability, along with the assignment of vendor accountability, enough to handle the complexity issue? The industry is aware of this limitation but does not as yet have strong management techniques for handling it effectively.

Early indications from regulators are that the COLs submitted to date are not meeting the NRC’s guidance and expectations in all regards, possibly a result of the applicants’ rush to make the 2008 year-end deadline for the incentives set forth in the Energy Policy Act. This could extend the licensing process and strain the resources of the NRC. In addition, the requirements of the NRC principally deal with public safety and environmental concerns. There are myriad other design requirements entailed in making a plant operate profitably.

The bigger risk is that the core strength of the industry – its ability to make significant incremental improvements – could also serve as the seed of its failure as it faces this next challenge. Investors, state regulators and the public are not likely to excuse serious cost overruns and time delays as they may have in the past. Utility executives are clear that nuclear is good to the extent that it’s economical. When asked what single concern they find most troubling, they often reply, “That we don’t know what we don’t know.”

What we do know is that there are no methods currently in place for beginning successful development of this next generation of nuclear power plants, and that the industry’s core management skill set may not be sufficient to build a process that differs from a “learn as you go” approach. Thus, it’s critical that the first few plants succeed – not just for their investors but for the entire industry.

THE OPPORTUNITY – KNOWING WHAT YOU DON’T KNOW

The vendors supporting the nuclear power industry represent some of the most prestigious engineering and equipment design and manufacturing firms in the world: Bechtel, Fluor, GE, Westinghouse, Areva and Hitachi. Despite this, the industry is not known for having a strong foundation in managing innovation. Getting a plant to the point of producing power requires orchestrating complex physical capital and myriad intangible human assets, political forces and public opinion as well as technology. Thus, more advanced management science could represent the missing piece of the puzzle for the nuclear power industry.

An advanced decision-making framework can help utilities manage unpredictable events, increasing their ability to handle the planning and anticipated disruptions that often beset long, complex projects. By using advanced management science, the nuclear industry can take what it knows and create a learning environment to find out more about what it doesn’t know, improving its odds for success.

Microsoft Helps Utilities Use IT to Create Winning Relationships

The utilities industry worldwide is experiencing growing energy demand in a world with shifting fuel availability, increasing costs, a shrinking workforce and mounting global environmental pressures. Rate case filings and government regulations, especially those regarding environmental health and safety, require utilities to streamline reporting and operate safely enterprise-wide. At the same time, increasing competition and costs drive the need for service reliability and better customer service. Each issue causes utilities to depend more and more on information technology (IT).

The Microsoft Utility team works with industry partners to create and deploy industry-specific solutions that help utilities transform challenges into opportunities and empower utilities workers to thrive in today’s market-driven environment. Solutions are based on the world’s most cost-effective, functionally rich, and secure IT platform. The Microsoft platform is interoperable with a wide variety of systems and proven to improve people’s abilities to access information and work with others across boundaries. Together, they help utilities optimize operations in each line of business.

Customer care. Whether a utility needs to modernize a call center, add customer self-service or respond to new business requirements such as green power, Microsoft and its partners provide solutions for turning the customer experience into a powerful competitive advantage with increased cost efficiencies, enhanced customer service and improved financial performance.

Transmission and distribution. Growing energy demand makes it critical to effectively address safe, reliable and efficient power delivery worldwide. To help utilities meet these needs, Microsoft and its partners offer EMS, DMS and SCADA systems; mobile workforce management solutions; project intelligence; geographic information systems; smart metering/grid; and work/asset/document management tools that streamline business processes and offer connectivity across the enterprise and beyond.

Generation. Microsoft and its partners provide utilities with a view across and into their generation operations that enables them to make better decisions to improve cycle times, output and overall effectiveness while reducing the carbon footprint. With advanced software solutions from Microsoft and its partners, utilities can monitor equipment to catch early failure warnings, measure fleets’ economic performance and reduce operational and environment risk.

Energy trading and risk management. Market conditions require utilities to optimize energy supply performance. Microsoft and its partners’ enterprise risk management and trading solutions help utilities feed the relentless energy demands in a resource-constrained world.

Regulatory compliance. Microsoft and its partners offer solutions to address the compliance requirements of the European Union; Federal Energy Regulatory Commission; North American Electric Reliability Corporation; Sarbanes-Oxley Act of 2002; environmental, health and safety mandates; and other regional jurisdiction regulations and rate case issues. With solutions from Microsoft partners, utilities have a proactive approach to compliance, the most effective way to manage operational risk across the enterprise.

Enterprise. To optimize their businesses, utility executives need real-time visibility across the enterprise. Microsoft and its partners provide integrated e-business solutions that help utilities optimize their interactions with customers, vendors and partners. These enterprise applications address business intelligence and reporting, customer relationship management, collaborative workspaces, human resources and financial management.

The GridWise Olympic Peninsula Project

The Olympic Peninsula Project consisted of a field demonstration and test of advanced price signal-based control of distributed energy resources (DERs). Sponsored by the U.S. Department of Energy (DOE) and led by the Pacific Northwest National Laboratory, the project was part of the Pacific Northwest GridWise Testbed Demonstration.

Other participating organizations included the Bonneville Power Administration, Public Utility District (PUD) #1 of Clallam County, the City of Port Angeles, Portland General Electric, IBM’s T.J. Watson Research Center, Whirlpool and Invensys Controls. The main objective of the project was to convert normally passive loads and idle distributed generation into actively participating resources optimally coordinated in near real time to reduce stress on the local distribution system.

Planning began in late 2004, and the bulk of the development work took place in 2005. By late 2005, equipment installations had begun, and by spring 2006, the experiment was fully operational, remaining so for one full year.

The motivating theme of the project was based on the GridWise concept that inserting intelligence into electric grid components at every point in the supply chain – from generation through end-use – will significantly improve both the electrical and economic efficiency of the power system. In this case, information technology and communications were used to create a real-time energy market system that could control demand response automation and distributed generation dispatch. Optimal use of the DER assets was achieved through the market, which was designed to manage the flow of power through a constrained distribution feeder circuit.

The project also illustrated the value of interoperability in several ways, as defined by the DOE’s GridWise Architecture Council (GWAC). First, a highly heterogeneous set of energy assets, associated automation controls and business processes was composed into a single solution integrating a purely economic or business function (the market-clearing system) with purely physical or operational functions (thermostatic control of space heating and water heating). This demonstrated interoperability at the technical and informational levels of the GWAC Interoperability Framework (www.gridwiseac.org/about/publications.aspx), providing an ideal example of a cyber-physical-business system. In addition, it represents an important class of solutions that will emerge as part of the transition to smart grids.

Second, the objectives of the various asset owners participating in the market were continuously balanced to maintain the optimal solution at any point in time. This included the residential demand response customers; the commercial and municipal entities with both demand response and distributed generation; and the utilities, which demonstrated interoperability at the organizational level of the framework.

PROJECT RESOURCES

The following energy assets were configured to respond to market price signals:

  • Residential demand response for electric space and water heating in 112 single-family homes using gateways connected by DSL or cable modem to provide two-way communication. The residential demand response system allowed the current market price of electricity to be presented to customers. Consumers could also configure their demand response automation preferences. The residential consumers were evenly divided among three contract types (fixed, time of use and real time) and a fourth control group. All electricity consumption was metered, but only the loads in price-responsive homes were controlled by the project (approximately 75 kW).
  • Two distributed generation units (175 kW and 600 kW) at a commercial site served the facility’s load when the feeder supply was not sufficient. These units were not connected in parallel to the grid, so they were bid into the market as a demand response asset equal to the total load of the facility (approximately 170 kW). When the bid was satisfied, the facility disconnected from the grid and shifted its load to the distributed generation units.
  • One distributed microturbine (30 kW) that was connected in parallel to the grid. This unit was bid into the market as a generation asset based on the actual fixed and variable expenses of running the unit.
  • Five 40-horsepower (HP) water pumps distributed between two municipal water-pumping stations (approximately 150 kW of total nameplate load). The demand response load from these pumps was incrementally bid into the market based on the water level in the pumped storage reservoir, effectively converting the top few feet of the reservoir capacity into a demand response asset on the electrical grid.

Monitoring was performed for all of these resources, and in cases of price-responsive contracts, automated control of demand response was also provided. All consumers who employed automated control were able to temporarily disable or override project control of their loads or generation units. In the residential real-time price demand response homes, consumers were given a simple configuration choice for their space heating and water heating that involved selecting an ideal set point and a degree of trade-off between comfort and price responsiveness.

For real-time price contracts, the space heater demand response involved automated bidding into the market by the space heating system. Since the programmable thermostats deployed in the project didn’t support real-time market bidding, IBM Research implemented virtual thermostats in software using an event-based distributed programming prototype called Internet-Scale Control Systems (iCS). The iCS prototype is designed to support distributed control applications that span virtually any underlying device or business process through the definition of software sensor, actuator and control objects connected by an asynchronous event programming model that can be deployed on a wide range of underlying communication and runtime environments. For this project, virtual thermostats were defined that conceptually wrapped the real thermostats and incorporated all of their functionality while at the same time providing the additional functionality needed to implement the real-time bidding. These virtual thermostats received the actual temperature of the house as well as information about the real-time market average price and price distribution and the consumer’s preferences for set point and comfort/economy trade-off setting. This allowed the virtual thermostats to calculate the appropriate bid every five minutes based on the changing temperature and market price of energy.
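
A rough sketch of what such a five-minute bid calculation might look like follows. The article does not publish the iCS bidding formula, so the linear rule, parameter names and constants here are all assumptions rather than the project’s actual logic.

```python
# Hypothetical virtual-thermostat bid rule (heating case); all constants
# and the functional form are assumptions, not the project's algorithm.
def thermostat_bid(indoor_temp, set_point, comfort_economy, avg_price, price_std):
    """Return the price the household is willing to pay to run the heater.

    comfort_economy: 0.0 = pure economy ... 1.0 = pure comfort.
    The farther the house drifts below the set point, the higher the bid,
    scaled around the market's recent average price and spread.
    """
    deficit = set_point - indoor_temp          # degrees below comfort
    if deficit <= 0:
        return 0.0                             # warm enough: do not bid
    k = 0.5 + 1.5 * comfort_economy            # comfort raises willingness to pay
    return avg_price + k * deficit * price_std

# A comfort-leaning home, 1.5 degrees cold, with the market near $0.05/kWh:
print(thermostat_bid(66.5, 68.0, 0.8, 0.05, 0.01))
```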

The real-time market in the project was implemented as a shadow market – that is, rather than change the actual utility billing structure, the project implemented a parallel billing system and a real-time market. Consumers still received their normal utility bill each month, but in addition they received an online bill from the shadow market. This additional bill was paid from a debit account that used funds seeded by the project based on historical energy consumption information for the consumer.

The objective was to provide an economic incentive to consumers to be more price responsive. This was accomplished by allowing the consumers to keep the remaining balance in the debit account at the end of each quarter. Those consumers who were most responsive were estimated to receive about $150 at the end of the quarter.
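
The incentive arithmetic is simple, as a small sketch shows; the dollar figures below are invented for illustration.

```python
# Shadow-market debit account: seeded from historical consumption, drawn
# down by shadow-market charges, with the remainder kept by the consumer.
seed = 300.00                                  # illustrative quarterly seed
monthly_charges = [78.40, 91.10, 62.75]        # illustrative shadow bills
payout = seed - sum(monthly_charges)
print(f"consumer keeps ${payout:.2f} this quarter")
```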

The market in the project cleared every five minutes, having received demand response bids, distributed generation bids and a base supply bid based on the supply capacity and wholesale price of energy in the Mid-Columbia system operated by Bonneville Power Administration. (This was accomplished through a Dow Jones feed of the Mid-Columbia price and other information sources for capacity.) The market operation required project assets to submit bids every five minutes into the market, and then respond to the cleared price at the end of the five-minute market cycle. In the case of residential space heating in real-time price contract homes, the virtual thermostats adjusted the temperature set point every five minutes; however, in most cases the adjustment was negligible (for example, one-tenth of a degree) if the price was stable.
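
Conceptually, each five-minute clearing matches the highest-value demand bids against the cheapest supply offers until the two curves cross. The sketch below illustrates that idea with a simple pay-as-cleared double auction; the project’s actual clearing engine is not described at this level of detail, and all bid data shown are invented.

```python
# Illustrative five-minute double auction (quantities assumed positive).
def clear_market(demand_bids, supply_offers):
    """demand_bids / supply_offers: lists of (price_per_kwh, kw).
    Returns (clearing_price, cleared_kw)."""
    demand = sorted(demand_bids, key=lambda b: -b[0])   # willing buyers, high first
    supply = sorted(supply_offers, key=lambda o: o[0])  # sellers, cheap first
    i = j = 0
    cleared, price = 0.0, None
    d_rem = demand[0][1] if demand else 0
    s_rem = supply[0][1] if supply else 0
    while i < len(demand) and j < len(supply) and demand[i][0] >= supply[j][0]:
        traded = min(d_rem, s_rem)
        cleared += traded
        price = supply[j][0]            # pay-as-cleared at marginal supply offer
        d_rem -= traded
        s_rem -= traded
        if d_rem == 0:
            i += 1
            d_rem = demand[i][1] if i < len(demand) else 0
        if s_rem == 0:
            j += 1
            s_rem = supply[j][1] if j < len(supply) else 0
    return price, cleared

base_supply = [(0.04, 500)]                      # wholesale-indexed feeder supply
dg_offers = [(0.09, 175), (0.12, 600)]           # distributed generation offers
dr_bids = [(0.15, 150), (0.08, 300), (0.03, 200)]  # loads' willingness to pay
print(clear_market(dr_bids, base_supply + dg_offers))  # -> (0.04, 450.0)
```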

KEY FINDINGS

Distribution constraint management. As one of the primary objectives of the experiment, distribution constraint management was successfully accomplished. The distribution feeder-imported capacity was managed through demand response automation to a cap of 750 kW for all but one five-minute market cycle during the project year. In addition, distributed generation was dispatched as needed during the project, up to a peak of about 350 kW.

During one period of about 40 hours that took place from Oct. 30, 2006, to Nov. 1, 2006, the system successfully constrained the feeder import capacity at its limit and dispatched distributed generation several times, as shown in Figure 1. In this figure, actual demand under real-time price control is shown in red, while the blue line depicts what demand would have been without real-time price control. It should be noted that the red demand line steps up and down above the feeder capacity line several times during the event – this is the result of distributed generation units being dispatched and removed as their bid prices are met or not.

Market-based control demonstrated. The project controlled both heating and cooling loads, which showed a surprisingly significant shift in energy consumption. Space conditioning loads in real-time price contract homes demonstrated a significant shift to early morning hours – a shift that occurred during both constrained and unconstrained feeder conditions but was more pronounced during constrained periods. This is similar to what one would expect in preheating or precooling systems, but neither the real nor the virtual thermostats in the project had any explicit prediction capability. The analysis showed that the diurnal shape of the price curve itself caused the effect.

Peak load reduced. The project’s real-time price control system both deferred and shifted peak load very effectively. Unlike the time-of-use system, the real-time price control system operated at a fine level of precision, responding only when constraints were present and producing a precise and proportionally appropriate level of response. The time-of-use system, on the other hand, was much coarser in its response and responded regardless of conditions on the grid, since it was only responding to preconfigured time schedules or manually initiated critical peak price signals.

Internet-based control demonstrated. Bids and control of the distributed energy resources in the project were implemented over Internet connections. As an example, the residential thermostats modified their operation through a combination of local and central control communicated as asynchronous events over the Internet. Even in situations of intermittent communication failure, resources typically performed well in default mode until communications could be re-established. This example of the resilience of a well-designed, loosely coupled distributed control application schema is an important aspect of what the project demonstrated.

Distributed generation served as a valuable resource. The project was highly effective in using the distributed generation units, dispatching them many times over the duration of the experiment. Since the diesel generators were restricted by environmental licensing regulations to operate no more than 100 hours per year, the bid calculation factored in a sliding scale price premium such that bids would become higher as the cumulative runtime for the generators increased toward 100 hours.
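
A sliding-scale premium of this kind can be captured in a few lines. The quadratic curve shape and the figures below are assumptions; the article specifies only that bids rose as cumulative runtime approached the 100-hour limit.

```python
# Hypothetical sliding-scale bid premium for the licensed-hours constraint.
def dg_bid(base_cost_per_kwh, hours_run, hours_limit=100.0):
    if hours_run >= hours_limit:
        return float("inf")                       # out of permitted hours
    premium = 1.0 + (hours_run / hours_limit) ** 2  # gentle early, steeper late
    return base_cost_per_kwh * premium

print(dg_bid(0.09, 10))   # early in the year: bids near cost
print(dg_bid(0.09, 90))   # near the limit: markedly higher, clears less often
```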

CONCLUSION

The Olympic Peninsula Project was unique in many ways. It clearly demonstrated the value of the GridWise concepts of leveraging information technology and incorporating market constructs to manage distributed energy resources. Local marginal price signals, as implemented through the market-clearing process, together with the overall event-based software integration framework, successfully managed the bidding and dispatch of loads and balanced the issues of wholesale costs, distribution congestion and customer needs in a very natural fashion.

The final report (as well as background material) on the project is available at www.gridwise.pnl.gov. The report expands on the remarks in this article and provides detailed coverage of a number of important assertions supported by the project, including:

  • Market-based control was shown to be a viable and effective tool for managing price-based responses from single-family premises.
  • Peak load reduction was successfully accomplished.
  • Automation was extremely important in obtaining consistent responses from both supply and demand resources.
  • The project demonstrated that demand response programs could be designed by establishing debit account incentives without changing the actual energy prices offered by energy providers.

Although technological challenges were identified and noted, the project found no fundamental obstacles to implementing similar systems at a much larger scale. Thus, it’s hoped that an opportunity to do so will present itself at some point in the near future.