The Hydrogen Economy

The expression “hydrogen economy” is appearing more and more frequently in
the headlines and on the bookstands. As recently as February 2005 the governor
of Florida unveiled his Hydrogen Energy Technologies Act and broke ground on
the first hydrogen energy station in Florida. The previous year he announced
a statewide initiative to grow the hydrogen industry, and he’s certainly not
ahead of the curve. Iceland already has plans to be the world’s first hydrogen
economy. Hawaii is beginning to convert its solar and geothermal resources into
hydrogen for energy. California has its own hydrogen superhighway, and several
years ago the governor of New York participated in a ribbon-cutting ceremony
opening a new manufacturing facility that would build hydrogen fuel cell systems
for both industry and individual homes.

To decide what’s going on and whether it’s for real, let’s look at recent newspapers
– and the gas pump. The Wall Street Journal (Feb. 28, 2005) carried a small
story titled, “Gas Prices Rise Despite Greater Supply.” The article states that
many analysts are predicting prices as high as $2.50 per gallon for gasoline
during the coming year, and the price is expected to continue to rise.

Production of energy within the United States, especially from petroleum, has
not kept pace with U.S. usage, driving up imports. The United States currently
imports 55 percent of its oil, a number that is expected to increase to 68 percent
by 2025.[1] Demand for electricity is expected to increase by 45 percent and
natural gas by greater than 50 percent.[2] Other consuming countries are also
entering the picture, in particular China and India. Their demand for petroleum
will complicate world demand and increase the price pressure within the United
States.

Nearly all Americans would agree that we need to be less dependent on fuel
imports. Although nobody knows when fossil fuel will run out, everyone knows
that it eventually will. So it’s better to begin finding an alternative now
than later. We can save $100 billion per year if we find our energy sources
internally (i.e., within the United States).[3] We can also reduce emissions
and strengthen our economy if we select the new source wisely.

Attention-Getter

Renewable energy is gaining increased attention because it’s just that – renewable.
Biomass, geothermal, hydropower, ocean, solar, wind and hydrogen are all natural
sources of energy. Recently hydrogen has been working its way toward the top
of the list. There was little interest in 1970 when electrochemist John O’M.
Bockris coined the phrase “hydrogen economy,” but the 1973 oil embargo spurred
an interest in hydrogen and the development of hydrogen fuel cells for conventional
commercial applications.[4]

In the 1980s, the Soviets flew a commercial airliner partially fueled by hydrogen
and an American flew a private aircraft propelled by hydrogen.[5] In the 1990s,
the transportation industry took an interest in hydrogen with the advent of
fuel-cell-powered buses and automobiles. In addition, acceptance of hydrogen
was improved when retired NASA engineer Addison Bain concluded that the Hindenburg
accident was not actually caused by hydrogen but by static electricity and the
highly flammable skin material of the airship.[4] As previously mentioned, Iceland
announced that it would be the first country with a hydrogen economy. Germany
has opened hydrogen fueling stations.

In this decade the automobile companies have continued to produce cars based
on either hydrogen or hybrid energy systems, and President Bush announced a
hydrogen fuel initiative in his 2003 State of the Union Address. The federal
budget in 2005 includes $225 million for hydrogen as a source of energy and
the proposed 2006 budget requests $260 million.

Usually we expect a debate to converge on a single optimum approach for as
serious an issue as the coming energy crisis, but just the opposite is happening.
It is diverging. Some say that cost is holding up the hydrogen economy, but
it is more likely that indecision (or too many independent decisions) is the
culprit. That’s understandable when one examines what comprises the hydrogen
economy. The only single point that has (fairly) unanimous agreement is that
hydrogen is an optimum potential source of future energy. Beyond that, every
other component of the future energy equation is a variable, a variable with
many values. It should be no surprise that the average American is confused.

Generally, for renewable hydrogen resources, 25 kilograms of hydrogen displace
approximately one barrel of oil with a corresponding reduction in greenhouse
gas emissions of 3 kilograms of carbon dioxide for each kilogram of hydrogen.[6]
Thus, choosing clean methods of hydrogen production yields a double benefit:
displaced oil imports and reduced air pollution. So although the path is not
clear, the goal of a hydrogen economy has obvious merit.
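
To make that arithmetic concrete, here is a minimal sketch in Python, using only the figures just cited: 25 kilograms of hydrogen per barrel of oil displaced, and 3 kilograms of carbon dioxide avoided per kilogram of hydrogen.

    # Illustrative arithmetic only, using the figures cited in the text above.
    KG_H2_PER_BARREL = 25   # kg of renewable hydrogen needed to displace one barrel of oil
    KG_CO2_PER_KG_H2 = 3    # kg of CO2 emissions avoided per kg of clean hydrogen

    def co2_avoided_per_barrel():
        """Return kg of CO2 avoided for each barrel of oil displaced."""
        return KG_H2_PER_BARREL * KG_CO2_PER_KG_H2

    print(co2_avoided_per_barrel())   # 75 kg of CO2 avoided per barrel displaced

Every barrel displaced by cleanly produced hydrogen thus avoids roughly 75 kilograms of carbon dioxide, in addition to the import cost it saves.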

Enter Hydrogen

Energy must be created, delivered and used, so there must be a source, production
techniques, a transportation system, methods of storage and types of consumption.

The Two Primary Sources of Hydrogen

We have all seen a drawing of the hydrogen atom, the first element in the chemical
table: a large globe, or nucleus, being orbited by a small sphere, the single
electron. But let’s not be led astray by hydrogen’s apparent simplicity. The
cliché “there’s good news and bad news” holds. The good news is that hydrogen
is incredibly abundant. It comprises 70 percent of the Earth’s surface,[7] 75
percent of the mass of the universe and 90 percent of its atoms.[8]

Although hydrogen is plentiful, the bad news is that it seldom remains in its
pure state, preferring to combine with other elements to form,
for example, water and hydrocarbons (such as methane), the two primary sources
for hydrogen today.

The 11 Hydrogen Production Techniques

To use hydrogen as an energy source, it has to be separated from the other
elements. That is, oxygen has to be removed from the water molecule, or carbon
must be removed from methane. Complexity arises from the fact that there are
at least 11 technologies available to do this[9]:

  • Steam methane reforming (SMR) (50 percent of worldwide hydrogen production);
  • Partial oxidation (POX);
  • Coal gasification (CG) (about 18 percent of worldwide hydrogen production);
  • Electrolysis (4 percent of worldwide hydrogen production);
  • Biomass;
  • Thermal cracking;
  • Photochemical;
  • Photo-electrochemical;
  • Thermochemical (there are six solar versions of these: thermolysis, thermochemical
    cycles, reforming, solar cracking, solar gasification and solar carbothermic
    reduction[10]);
  • Thermal decomposition (high-temperature systems using solar thermal, geothermal,
    biomass and nuclear energy); and
  • Pyrolysis.

The first three are the most technologically “ready” but are not renewable
and depend on fossil feedstocks (natural gas or coal), which means that they will
eventually become more expensive. They also produce carbon dioxide as a byproduct. (Remember
that hydrogen use is only a clean form of energy if the process to produce it
is also clean.)

The fourth technology, electrolysis, produces chemical changes by passing an
electric current through an electrolyte; when water is electrolyzed, the products
are hydrogen and oxygen. The electricity required can be generated by various
means such as solar, wind, nuclear and geothermal, and since these sources are
largely renewable, they should not increase in cost over the coming decades.

Wind generation of electricity, as one example, has averaged a growth of 32
percent per year from 1995 through 2002. Some European countries are already
obtaining significant amounts of their electricity from wind, and as the available
energy produced in this manner has increased, so has public support. Its largest
obstacle is accessibility: the best wind resources lie far from many load centers.
If wind were our only energy provider, hydrogen pipelines or electric transmission
lines would have to carry the energy from, for example, the Great Plains to the
remainder of the United States.

Biomass (plant-derived materials) has been the largest renewable energy source
in the United States since 2000, providing 47 percent of all renewable energy
and 4 percent of the total energy in 2003.[11] It consumes agricultural and
forest residues along with other organic byproducts. Ethanol and bio-diesel
fuel – derived primarily from plant matter and agricultural products such as
corn – are becoming of increasing importance in the transportation industry.

The United States has the capability of supporting more than one of these techniques
for hydrogen energy production. The eastern half of the United States, for example,
has great potential for biomass and the western half for solar and wind energy.[10]
Any of the remaining six are also possible, especially if there are breakthroughs.

Transportation

Once hydrogen has been produced, it must be safely delivered to various parts
of the country in individual containers such as small cylinders for decentralized
use or through pipelines for centralized distribution. Regardless of the type
of distribution, the technical problems for both liquid (cryogenic) and gaseous
hydrogen containment are extensive. The hydrogen molecule is small, which makes
compressing and sealing its container difficult. Liquefaction requires cooling
the hydrogen to about minus 253 degrees Celsius (some 20 kelvins above absolute
zero) and maintaining it there, a formidable task but one already accomplished.

Other methods[12] of hydrogen distribution include chemical compounds, absorptive
metallic alloys, and carbon or other substrates. These may be more easily transported
than pure liquid or gas, but the technologies still need significant development.

Hydrogen can also be used to generate electricity directly for transmission
over an electric grid. At this time, the costs for the hydrogen pipelines and
the additional electric lines needed are comparable.[13] Regardless, additional
electric lines will inevitably be needed since transmission line saturation
is already occurring.[14]

Storage

Storage is currently considered the biggest challenge to the future of the
hydrogen economy. The problems associated with the storage of gaseous or liquid
hydrogen are similar to those associated with its transport, thereby raising
serious questions: Does the average American want a 10,000-pounds-per-square-inch
gas bottle in his or her vehicle? Is a hydrogen pipeline a security risk?

Other possibilities include chemical compounds, which offer a large variety
of media for storage. They are receiving considerable research attention.
Various metal and liquid hydrides, nanoparticles and carbon compounds are also
being investigated. Unfortunately, in many cases the advantage of the storage
medium is negated by the considerable energy that must be expended to extract
the stored hydrogen.

Uses

Hydrogen can be burned to produce energy in the form of heat or transformed
directly into electrical current. Using transportation (which consumes almost
two-thirds of our oil) as an example, vehicles can be powered by hydrogen combustors
or internal combustion engines (heat), by hydrogen fuel cells (electricity),
or by a combination of any of those with a battery.

Thus begins the classic game of chicken and egg. The fuel supply method will
have a strong influence on the type of energy conversion system (combustion,
electric motor/generator) and vice versa. If automobile manufacturers knew for
sure which type of hydrogen fuel would be available for the consumer then they
could produce the vehicle. If energy providers knew what type of automobile
engine they were to service, they could supply the fuel. Which goes first? Which
is it, heat or electricity? No one wants to drive a vehicle without readily
available fuel. And no one wants to have a refueling station for nonexistent
vehicles.

The Future

It’s highly probable that if we were to tour a home in the year 2040, we would
never realize that we had entered the hydrogen era. When we turn on the light
switch, electrons will produce light and power the stove and cooling/heating
system. Except for the cleaner air, we might not even notice any difference
when we drive through the countryside. During our stop for coffee at the local
service station we would purchase fuel in our accustomed fashion before continuing
our journey.

In 35 years, it will all seem very simple and straightforward. But at the other
end of the power lines and at the beginning of the fuel line, very sophisticated
technology will be operating. Just what that technology will be needs to be
determined soon if it is to be available in 2040. When the light switch is turned
on, the electrons could be generated by a localized hydrogen combustor or from
a fuel cell in the home’s basement. They might also come from an external electric
grid. If it’s external, the grid could involve a fuel cell, solar or wind unit
that provides energy only for that home or for the housing area or industrial
complex, or it could be connected to a regional electric grid much as we have
today.

If the hydrogen economy seems confusing, all one has to do is review the numbers
to understand why. Mathematically, using just the technologies mentioned here,
there are more than 3,000 possible combinations of means for providing those
electrons at your switch. And it is undetermined how many technological barriers
are associated with those combinations. These are far too many even for a country
with the scientific resources of the United States.
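
The 3,000-plus figure is easy to reproduce by simple counting. The category sizes in the sketch below are illustrative assumptions drawn loosely from the options discussed in this article, not an official tally:

    # Illustrative combinatorics; the counts per category are assumed for demonstration.
    production = 11   # production techniques listed earlier in this article
    delivery   = 5    # e.g., gas pipeline, liquid tanker, cylinders, solid carriers, wires
    storage    = 5    # e.g., compressed gas, liquid, hydrides, carbon, chemical compounds
    end_use    = 3    # e.g., combustion engine, fuel cell, hybrid with a battery
    grid       = 4    # e.g., on-site, neighborhood, industrial-complex, regional grid

    print(production * delivery * storage * end_use * grid)   # 3,300 combinations

Multiply five modest lists together and the total is already past 3,000; every additional option multiplies it again.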

Are these barriers hurdles or roadblocks? One generally goes over hurdles and
around roadblocks. Doing either is “just engineering,” but before the engineers
can go over or around, they have to identify and understand the barrier. Any
choice made, for example, between centralized and decentralized electric grids,
between gaseous and liquid hydrogen combustors, or between internal combustion
engines and fuel cells reduces the number of technical barriers.

Americans agree that something must be done to free us from dependency on foreign
oil and natural gas. Right now, with over three thousand possible solutions,
the number of potential barriers seems insurmountable. Decisions, commitments
and unbiased judgments need to come to bear to reduce those barriers to a reasonable
and solvable number before we run out of energy. What a shame it would be to
have our grandchildren sit in darkness, surrounded by something that covers 70
percent of the Earth’s surface yet be helpless to use it.

Endnotes

  1. “President’s Hydrogen Initiative: A Clean and Secure Energy Future,” www.eere.energy.gov/hydrogenandfuelcells/presidents_initiative.html.
  2. Report of the National Energy Policy Development Group, May 2001, U.S. Government
    Printing Office, ISBN 0-16-050814-2.
  3. Renewable Hydrogen Forum, A Summary of Expert Opinion and Policy Recommendations,
    National Press Club, Washington, D.C., Oct. 1, 2003, Presented by American
    Solar Energy Society p. 10.
  4. “The History of Hydrogen,” Fact Sheet Series, Facts H 1.008, www.hydrogenus.com/History-of-H2-Fact-Sheet.pdf
  5. The Hydrogen Economy: The Next Great Economic Revolution, Jeremy Rifkin,
    Tarcher/Penguin, New York, NY, 2003.
  6. Renewable Hydrogen Forum, A Summary of Expert Opinion and Policy Recommendations,
    National Press Club, Washington, DC, Oct. 1, 2003, Presented by American Solar
    Energy Society p. 42.
  7. Hydrogen Technical Advisory Panel (HTAP) U.S. DOE, “Fuel Choice for Fuel
    Cell Vehicles.”
  8. “Hydrogen.” The Columbia Encyclopedia, Sixth Edition. Columbia University
    Press, 2001.
  9. Renewable Hydrogen Forum, A Summary of Expert Opinion and Policy Recommendations,
    National Press Club, Washington, D.C., Oct. 1, 2003, Presented by American
    Solar Energy Society p. 42.
  10. Ibid. p.22.
  11. The Hydrogen Economy: Opportunities, Costs, Barriers and R&D Needs,
    National Research Council and National Academy of Engineering of the National
    Academies, The National Academies Press, Washington D.C., 2004.
  12. Ibid. p. 41.
  13. Renewable Hydrogen Forum, A Summary of Expert Opinion and Policy Recommendations,
    National Press Club, Washington, D.C., Oct. 1, 2003, Presented by American
    Solar Energy Society p. 16.
  14. Ibid. p. 59.

Electric Generation Capacity: What, Where and Who

The principal driver of energy consumption and demand growth in the United
States is increased population. The nation’s population is growing at a rate
of approximately 1 percent per year, as are total energy consumption and electric
consumption.[1],[2],[3] If the current trend continues, the U.S. population
will be more than 460 million by 2050 and more than 750 million by 2100. If
current electricity consumption and demand trends continue, electric generating
capacity would be required to grow by nearly 60 percent by 2050 and by nearly
160 percent by 2100. While projecting the past into the future hardly represents
strategic planning, it can fit Einstein’s definition of insanity: “Continuing
to do the same things and expecting different results.”
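
Those projections are simple compound growth. A short sketch (assuming a 2005 baseline of roughly 296 million people, a figure not given in this article) reproduces the numbers quoted above:

    # Compound-growth check; the ~296 million 2005 baseline is an assumption.
    def grow(base, annual_rate, years):
        return base * (1 + annual_rate) ** years

    pop_2005 = 296e6
    print(round(grow(pop_2005, 0.01, 45) / 1e6))   # ~463 million people by 2050
    print(round(grow(pop_2005, 0.01, 95) / 1e6))   # ~762 million people by 2100

    # The same 1 percent per year trend applied to electric generating capacity:
    print(round((1.01 ** 45 - 1) * 100))   # ~56 percent more capacity needed by 2050
    print(round((1.01 ** 95 - 1) * 100))   # ~157 percent more capacity needed by 2100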

In order to do different things to gain diverse results, we must explore:

  • What electric generation technologies will be used to meet future demand
    for electricity in the U.S. and to replace the components of the current,
    aging U.S. generating fleet?
  • Where will new and replacement generation capacity be located?
  • Who will build, own and operate the generating fleet?

This set of decisions will most likely be affected by political and environmental
considerations, as well as by technology and economics. For example, the growing
interest and concern regarding emissions from electric generation facilities
includes: criteria pollutants (sulfur oxides, nitrogen oxides, particulates,
mercury and radioactive gases, which are already regulated) and gases not currently
classified as pollutants, such as carbon dioxide. Federal limitations (or caps)
on carbon dioxide emissions, particularly if established on an absolute rather
than a per-capita basis, would be the most daunting political and environmental
challenge. It would also have the greatest impact on the electric generation
technologies employed to meet the future energy needs of the U.S.

As an illustration, consider Senate Bill 139, the Climate Stewardship Act of
2003. This bill would have required the reduction of total absolute carbon dioxide
emissions in the U.S. to 1990 levels by 2016 and would have capped them at that
level in the future. The door is left open for further reductions and reduced
caps on future carbon dioxide emissions beyond those required by the Kyoto Accords.
This initial reduction and compliance schedule is four years longer than that
established by the Kyoto Accords, which stipulated absolute emissions reductions
to 7 percent below 1990 levels by 2012 in the U.S. While some analysts have
characterized achieving these reductions and maintaining these caps as achievable
and realistic (even in the face of a growing population), others seriously question
both the need to reduce carbon dioxide emissions and the economic consequences
of doing so to this degree and on this schedule.

The X Factors

While the United States Congress is considering energy legislation and environmentalists
are advocating carbon dioxide emissions reductions, energy consumption and demand
continue to grow. Therefore, assuming no change in current population trends
or in per-capita energy consumption trends, the 60 percent increase by 2050
in U.S. energy consumption and U.S. electric consumption and demand would require
the construction of approximately 400 GW of incremental generation capacity,
as well as the replacement of most of the 680 GW of existing generating capacity
serving the U.S. market by 2050. If carbon dioxide emissions were capped at
1990 levels, this cumulative generation capacity would be required to have an
average carbon dioxide emission level per GW of capacity 50 percent below the
carbon dioxide emissions rate of the current U.S. generation fleet. This reduction
in carbon dioxide emissions would require a combination of dramatically more
efficient power plants, a significant shift in the mix of generation technologies
serving the U.S. market and the application of new technology for the capture
and permanent fixation of a significant portion of the carbon dioxide that would
otherwise be released.
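
The 50 percent figure can be sanity-checked with a pair of ratios. The assumption that the current fleet’s carbon dioxide emissions run about 20 percent above 1990 levels is an illustrative input for this sketch, not a figure taken from this article:

    # Illustrative check of the per-GW emissions cut implied by a 1990-level cap.
    current_vs_1990 = 1.20   # ASSUMED: current emissions ~20 percent above 1990 levels
    capacity_growth = 1.60   # ~60 percent more generating capacity by 2050 (see above)

    allowed_per_gw = (1.0 / current_vs_1990) / capacity_growth
    print(round(allowed_per_gw, 2))   # ~0.52 of today's fleet-average rate,
                                      # i.e., roughly a 50 percent reduction per GW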

A second factor that will also have a significant effect on what power plant
technology will be used and where it will be located is the availability of
water resources. Potable water for personal consumption and agricultural irrigation
is already under significant pressure in many parts of the United States. Some
states are restricting the use of water for power plant cooling, requiring the
use of dry cooling towers on new power plants. (The availability of power in
parts of Europe during the summer of 2003 was limited by the unavailability
of adequate cooling water for the full-capacity operation of nuclear power plants
in France.) The projected growth in U.S. population will put further pressure
on potable water supplies, yet current critical infrastructure planning in electricity
does not take this constraint into account.

The need for additional potable water produced by the desalination of seawater
and brackish water may have an even more significant impact on the technology
selection and location of power-generation capacity. While current desalination
plants typically use the reverse osmosis process, this process is electricity-intensive.
The heat rejected by steam-cycle power plants can also be used to desalinate
seawater or brackish water, increasing the overall thermal energy efficiency
of the power plants as well. However, this joint application would require that
the power plants be located relatively close to large sources of nonpotable
water along or near coastal areas.

A third factor is the potential demand for hydrogen as a motor vehicle fuel.
Hydrogen can be produced using either electrochemical or thermochemical processes;
however, the thermochemical process can also use the thermal energy released
in nuclear steam-cycle power plants, thus increasing the overall thermal energy
efficiency of the power plants. Hydrogen can also be produced by reforming natural
gas with steam, but this process releases carbon dioxide, and its application
to hydrogen production will ultimately be limited by competing demands for natural
gas and by federal and state restrictions on natural gas exploration and production.

A Mighty Wind?

The growing pressure to increase the role of renewable energy in power generation
would place new generation resources in very different locations from fossil
fuel or nuclear capacity requiring cooling water, or using process heat for
water desalination or hydrogen production. Increased solar and wind generation
would also require significantly improved electricity storage equipment. A significant
increase in wind-powered generation would add an additional complication. According
to the wind power industry, equivalent wind generation capacity must be installed
at eight to 10 selected sites to assure reliable availability of 8,760 hours
per year, since wind availability at any specific location is inconsistent.
Alternatively, larger-capacity facilities could be installed at a limited number
of sites, in combination with significantly improved electricity storage equipment,
to compensate for the intermittent nature of wind itself. Transmission capacity
would have to be available to move power at peak capacity from each of these
multiple sites to the load centers served by the generation. Regardless, it
is clear that both solar and wind generation must transition from their current
role as “sources of opportunity” to reliable sources of power if they are to
provide a significantly greater share of U.S. electricity generation.

Significant increases in large-scale hydroelectric generation in the U.S. are
unlikely because of environmental concerns and the limited availability of potential
additional sites. Significant increases in biomass generation face the same
carbon dioxide emissions and cooling water availability issues as coal-fueled
generation.

Perhaps the wild card in renewable-source generation of electric power is geothermal
production from “dry, hot rock,” which is estimated to have the potential capacity
to meet all of our electric generation needs in perpetuity.[4] However, the
technology required to drill into this resource is not currently available.

A Modest Proposal

The challenges associated with the development of incremental and replacement
generating capacity could be offset somewhat by a variety of factors, including:

  • Increased appliance and equipment efficiency;
  • Active retail and wholesale demand response, and market-based retail pricing;
  • Increased electric load factors, resulting from both active and passive
    demand management by electricity users; and
  • The increased use of onsite or near-site power systems for peak shaving,
    load sharing or combined heat and power systems.

Controlling the rate of population growth is another potential approach, although
it is fraught with political and ethical difficulties.

No single generation technology is likely to supply all of the new and replacement
generation capacity required over the next 50 to 100 years. The total capacity
requirement of approximately 1,700 GW would require approximately: 1,000 to
1,500 large nuclear power plants or Integrated Gasification Combined-Cycle (IGCC)
coal-fueled power plants with carbon dioxide fixation technology; 10,000 to
12,000 Combined-Cycle Turbine (CCT) natural gas-fueled power plants; 3,500,000
5-MW wind turbines; or roughly 7,500 square miles of flat plate solar photovoltaic
collectors, with the associated storage and inverters. Serving this total capacity
with either coal or natural gas-fueled plants would exhaust the known reserves
of these fuels within the expected lives of the generators. These estimates
ignore the energy required for water desalination and hydrogen generation. While
much of this energy could be recovered from nuclear power plants, additional
generation would be required in the coal-fueled IGCC, natural gas CCT, wind
and solar cases, since adequate reject heat of sufficient quality would not
be available. NIMBY (not in my back yard) and BANANA (build absolutely nothing
anywhere near anyone) concerns will significantly increase the costs of developing
these new generation resources.
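
The plant counts above are straightforward division; a quick back-of-the-envelope check (with assumed unit sizes where the text does not give them, and the eight-to-10-site redundancy cited earlier for wind) lands in the same ranges:

    # Back-of-the-envelope check; unit sizes marked "assumed" are illustrative.
    TOTAL_GW = 1700

    print(round(TOTAL_GW / 1.2))    # ~1,400 nuclear or IGCC plants (assumed ~1.2 GW each)
    print(round(TOTAL_GW / 0.15))   # ~11,300 gas-fueled CCT plants (assumed ~150 MW each)
    print(round(TOTAL_GW * 1000 / 5 * 10))   # ~3,400,000 5-MW wind turbines, including the
                                             # roughly tenfold site redundancy cited earlier
    # Implied solar intensity: 1,700 GW spread over 7,500 square miles of collectors
    print(round(TOTAL_GW * 1e9 / (7500 * 2.59e6)))   # ~88 watts of capacity per square meter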

The construction of new nuclear power plants presents an additional series
of very difficult issues. The nation’s nuclear-generation industry has been
plagued by high investment costs, the perceived risk of a nuclear accident,
delays in construction schedules resulting from regulatory re-evaluation of
plant design and construction requirements, the federal government’s failure
to meet its obligations regarding longterm waste storage, and issues regarding
evacuation plans. Concerns regarding nuclear plant vulnerability to terrorist
attack have recently been added to the mix. It is highly unlikely that any new
nuclear generating plant construction will occur in the U.S. until regulatory
process issues are resolved. Plant developers can be reasonably sure that plants
can proceed through the siting, design, construction and commissioning process
without protracted delays.

A Road Map

Assuring that the solutions adopted today to solve current problems are sufficiently
flexible and adaptive to solve tomorrow’s problems is a very difficult challenge.
This test is especially difficult because the acceptable parameters for tomorrow’s
solutions, while they can be discussed, have not been defined in law and regulation.
This means they cannot be known with reasonable certainty. In many cases, they
are also not defined technically, particularly in commercially available hardware,
especially hardware that can be tested and characterized, much less deployed.
However, identifying, analyzing and discussing these issues are crucial to ensuring
that they are taken into account in current planning. In the immortal words
of that great American philosopher Yogi Berra: “You’ve got to be careful if
you don’t know where you’re going, because you might end up somewhere else.”

Endnotes

  1. U.S. Census estimates. www.census.gov/popest/states/tables/NST-EST2004-03.xls
  2. U.S. Department of Energy’s Energy Information Administration energy overview
    1949-2003. www.eia.doe.gov/emeu/aer/txt/stb0101.xls
  3. U.S. Department of Energy’s Energy Information Administration electricity
    overview 1949-2003. www.eia.doe.gov/emeu/aer/txt/stb0801.xls
  4. The Status and Future of Geothermal Electric Power, Charles F. Kutscher,
    NREL/CP-550-28204.

Service Delivery Management

Organizations today are challenged with simultaneously improving performance
and lowering costs. Many are discovering that the key lies in “reinventing service”
by adopting a service delivery management (SDM) strategy, which enables optimization
and effective management of customers, assets and workforce. By focusing on
operational business flows across organizations, and breaking down the information
silos that prevent true business transformation, this holistic and process-centric
approach to service distinguishes SDM from previous approaches that were solely
customer-centric or asset-centric.

The evolution of paradigms within the utility industry illustrates the growing
realization that processes are interrelated and must be considered in concert
for true effectiveness (see Figure 1). Given this evolution, one might naturally
assume that service delivery management is simply a matter of systems integration.
Indeed, systems integration is a core element. However, the real essence of
SDM goes beyond integrating systems to:

  • Resolve the cultural and organizational barriers that impede transformational
    change;
  • Normalize the service components used by the organization; and
  • Optimize the key service delivery processes.

Thus, service delivery management is a systems integration strategy and a business
integration strategy, which can only be executed with the resolve and leadership
of the business owners of the service processes.

While service delivery management does involve systems integration, it is much
more comprehensive than solely the interconnection of multiple applications.
Systems integration, in this larger context, is not a trivial problem. As Dr.
Zarko Sumic, vice president for META Group, observed, “Although optimization
of the complex business processes spanning a number of lines of business offers
the largest improvement opportunities, it poses the greatest technical challenge.”[1]
The ability of organizations to technically and cost-effectively integrate applications
and business processes stands as one of the impediments to true business transformation.
Yet, it is key to enabling service delivery management.

Paul Greenberg, author of CRM at the Speed of Light, writes, “In order for
this model to work effectively, technology has to be fully integrated with both
existing systems and third-party systems and with the processes and data that
exist for making the service delivery management solution an appropriate and
successful one.”[2]

Defining Barriers

Integration costs alone have historically represented 30 to 50 percent of the
cost of a project. Therefore, it is imperative that the associated cost be reduced
and the process made more agile for significant value to be attained. Understanding
historic integration barriers and how these barriers can be addressed is essential
to solving the integration conundrum and achieving the interoperability required
for an effective strategy.

Barrier 1 – Need for ‘Process Aware’ Applications

Applications have typically lacked the breadth and depth of support for interactions
necessary across multiple systems and business entities. Interfaces are often
naive in their approach, supporting only simple exchanges, and the available
methods are often too incomplete to build robust collaborative processes. Moreover,
organizations are typically limited to a “least common denominator” and are
forced to model the flow of information supported by the least robust application
in the collaborative process. Given this lack of process awareness, many organizations
resort to more costly custom interfaces, which in turn make systems brittle
and impede agility.

Barrier 2 – Lack of Common Semantics

Every time a boundary is crossed between applications, a price is paid. Because
each application incorporates various entities, objects or services within its
unique model, an account, a contract or a work order may be conceptually distinct
within each application model. Information assumed to be known to an entity,
object or service may actually only be known to another portion of the application
– or not known at all.

Bridging these semantic gaps can present some interesting challenges. An urban
legend holds that if you translate “the spirit is willing, but the flesh is
weak” to Russian and back to English, you get something like “the vodka is good,
but the meat has disease.” Whether true or not, the point is that crossing semantic
boundaries has a price and can prove challenging.

Barrier 3 – Number of Applications to Be Integrated

The cost and amount of integration work required go up dramatically as the
number of applications increases. While strategies can be employed to eliminate
each connection requiring a point-to-point integration, complexity still increases
as more systems are used. Fewer may be better, but the Goldilocks principle
seems to apply. One giant, monolithic system comprising your entire business
may initially seem appealing, until you need to change. Then, changing just
one module requires a complete upgrade for the entire enterprise – an extremely
costly and inefficient approach. It is far more efficient to determine the point
that’s “just right.”
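
The underlying growth is roughly quadratic: with n applications, a fully meshed point-to-point approach needs n(n-1)/2 connections, while routing everything through a shared hub or service bus needs only n. A minimal sketch of the difference:

    # Connection counts: point-to-point mesh versus a shared hub (or service bus).
    def point_to_point(n):
        return n * (n - 1) // 2   # every application integrates directly with every other

    def hub_and_spoke(n):
        return n                  # every application integrates only with the hub

    for n in (5, 10, 20, 40):
        print(n, point_to_point(n), hub_and_spoke(n))
    # 5 apps -> 10 vs. 5 links; 40 apps -> 780 vs. 40 links

Reducing the number of applications, or the number of boundaries crossed, therefore pays off much faster than linearly.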

Barrier 4 – Technical Issues

The technical challenge is often people’s first concern, but in many ways it
is the least important. The barriers outlined above actually have much greater
impact. The barriers of the past that led to point-to-point integrations based
on the data model itself have given way to a steady stream of technical advancements.
These advancements were accomplished by applying distributed object-oriented
computing principles to the technical integration problem. The result has been
solutions providing higher levels of abstraction through application programming
interfaces. Middleware solutions have provided the ability to bridge the gap
between legacy solutions and the lack of uniformity in technical approaches.

Hurdling the Barriers

Addressing these barriers is a significant concern for organizations executing
a service delivery management strategy. The overall focus on systems integration
has marked a new dynamic that Sumic has referred to as the “era of interoperability.”
According to Sumic, “In the ‘era of interoperability,’ the principal technology
goal is a ‘frictionless’ integration environment focused on business processes
rather than systems. Because system boundaries often do not reflect process
boundaries, the combination of the process-oriented view and the emerging service-oriented
architecture provides the ability to drive value through cross-functional business
process optimization.” In order for organizations to realize the benefit from
this new era, thoughtful consideration must still be given to removing the interrelated
barriers of the past.

  • Process awareness – Solutions must be aware of the nature of interactions
    required in the context of broader business process – within and beyond the
    enterprise. This awareness implies that methods must be available that are
    robust and complete.
  • Common semantics – The common semantic challenge persists as a significant
    issue. Uniform semantic standards do not exist for most processes. Selecting
    solutions that embrace semantic standards where available, combined with selecting
    integration touch points that are known, is currently the best approach available.
  • Number of applications – Leveraging the full capabilities of suites that
    enable a service delivery strategy minimizes the number of applications that
    must be involved in supporting the business process. This approach allows
    organizations to deploy components that reduce integration complexity and
    cross boundaries at points where the semantic integration touch points are
    well understood.
  • Technical – The advent of service-oriented architectures and the use of
    Web services mark significant advancement and uniformity in our overall technical
    approach to integration.

Collectively, the primary initiative should be acquiring service-oriented business
applications that provide coherent “pre-packaged” components supporting the imperatives
of the service delivery domain – customer management, asset management and field
service management.

From the Business Side

More significant than system integration to service delivery management is
the execution of a sound business integration strategy. If achieving transformational
change and business value requires more than systems integration, what are the
missing elements? An integrated approach for service performance and service
execution – combining your customer relationship management, field service and
asset management strategies. True transformation will require the following:

Resolve the Cultural and Organizational Barriers That Impede Transformational
Change

Business processes do not stop at the boundaries of an application, or at the
boundaries of an organization; they actually extend to your customers, suppliers,
contractors and partners. So, achieving transformational change requires breaking
down not only the application barriers, but also the people and information
barriers.

When optimizing business processes, think in terms of the processes that span
multiple organizations and where information created in one area is used by
another. Cultural and organizational issues are often what prevent true change
and impede progress. Examples:

  • Technicians with similar skills are unable to share workloads because of
    organizational/territorial and system boundaries.
  • Customer service representatives are unable to make specific commitments
    to the customer as to when work will be performed or provide updates about
    the status of work in progress.
  • Groups responsible for the execution of work and the reliability of the
    infrastructure have “supply chain clashes” with the financial groups responsible
    for the stewardship of financial processes.

Eliminating the suboptimization that results when independent groups make myopic
decisions in their portion of a process remains the most formidable barrier
to change. Often, this is a structural problem. No one has responsibility for
the complete service delivery process. Rather, fiefdoms exist for call center,
billing, field service, construction, maintenance, inventory and payables activities.
Transforming an organization to enable it to “reinvent service” requires breaking
down these cultural and organizational barriers and forcing organizations to
share information and enable processes that support the operational efficiency
of business.

Normalize the Service Components Used by the Organization

Once an organization is willing to address the organizational and cultural
challenges, it can begin to reconcile redundant or overlapping applications
– applications that fill the same role, but for different groups. For instance,
organizations often have multiple work management systems. While the IT inefficiency
is obvious, normalizing the application portfolio needs to be carefully thought
through. Too often, this exercise is driven from the IT perspective of simply
reducing the number of applications and integration points it needs to support.
Instead, it should be driven from a business process perspective. The guiding
principles for normalization are:

  • Business value that can be driven by re-engineering existing work practices
    through the use of a single application.
  • Ability of the application to manage all aspects of the role required to
    be filled by the application.
  • Capacity of the application to provide the depth of capability needed –
    not just to support existing practices, but “next practices” once the organization
    is prepared to move to higher levels of operating efficiencies.
  • Ability to tailor the application to promote efficiency based on the capabilities
    needed today and how those capabilities are used.

Optimize the Key Service Delivery Processes

Once complete end-to-end processes are enabled through systems integration,
value can be driven through the optimization of resources that drive those processes.
To optimize resources, first consider the interrelationship of customer-facing
and asset-facing activities and their relationships to work processes and supply
chain management processes. As Greenberg observed, “Field service is not simply
a ‘customer-facing’ set of processes, applications and actions. It cuts a massive
swath across all elements of business including supply chain, customer relationship
management, and enterprise resource planning [asset management]. It intersects
the demand and supply chains and the subsequent molding of the value chain.”[3]
The downstream processes that define work and support the execution of work
are critical to one’s ability to deliver service. But, to stop there tells only
part of the story.

While customers are a primary source of demand for the execution of work in
the field, there are additional sources, which tend to be oriented around the
maintenance of the infrastructure for the delivery of the utility service itself.
These activities tend to be less reactive, but are equally important to the
value chain as they support the continued reliable delivery of service. Conceptually,
all of these demand sources, together with the associated service-level commitments,
must be balanced with the supply sources that are necessary to complete the
work tasks. Supply sources include the availability of skilled labor, both inside
and outside of the four walls of the utility, as well as all of the associated
parts, tools, equipment, permits and instructions needed to complete the job
(see Figure 2).

Service optimization technology – employing a combination of advanced heuristics
and genetic algorithms – can consider the overwhelming complexity of variables
and constraints, combined with the nearly infinite potential order sequencing
combinations, to provide an optimal schedule and to react to changes in real
time. The power of these technologies is enormous. In fact, it is precisely
the lack of such tools in the past that perpetuated inefficiencies and served
as a primary barrier to transformation. The power of the tool goes beyond just
doing a better job of scheduling. It enables organizations to completely reconsider
how work is organized and how the skill sets within the business can be most
effectively leveraged.
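
As a purely illustrative sketch of the balancing act described above, the toy scheduler below assigns each hypothetical work order, tightest service-level commitment first, to the earliest-available technician holding the required skill. Every name and number is made up, and commercial service-optimization engines use far richer heuristics and genetic algorithms than this greedy rule:

    # Toy scheduling sketch with hypothetical data; real service-optimization engines
    # also handle travel time, parts, permits, shift rules and real-time re-optimization.
    from dataclasses import dataclass

    @dataclass
    class Job:
        name: str
        skill: str
        hours: float
        due_hour: float        # service-level commitment, in hours from now

    @dataclass
    class Tech:
        name: str
        skills: set
        free_at: float = 0.0   # hour at which the technician is next available

    def schedule(jobs, techs):
        plan = []
        # Greedy rule: handle the tightest service-level commitments first.
        for job in sorted(jobs, key=lambda j: j.due_hour):
            candidates = [t for t in techs if job.skill in t.skills]
            if not candidates:
                plan.append((job.name, None, None))   # unservable with current skills
                continue
            tech = min(candidates, key=lambda t: t.free_at)
            start = tech.free_at
            tech.free_at = start + job.hours
            plan.append((job.name, tech.name, start))
        return plan

    jobs = [Job("leak check", "gas", 2.0, due_hour=2),
            Job("meter swap", "metering", 1.0, due_hour=4),
            Job("transformer inspection", "electric", 3.0, due_hour=8)]
    techs = [Tech("A. Rivera", {"gas", "metering"}), Tech("B. Chen", {"electric"})]
    for job_name, tech_name, start in schedule(jobs, techs):
        print(job_name, "->", tech_name, "at hour", start)

Even this crude rule illustrates the point made above: once demand and supply are visible in one place, the organization can rethink how work is grouped and which skills are sent where.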

Conclusion

Service delivery management is a philosophy for reinventing service by achieving
constructive operational transformation based on a process-centric paradigm.
It is a business integration strategy combining both your customer relationship
management (service performance) and asset management (service execution) strategies
and encompassing key operational strategies for:

  • Systems integration;
  • Cultural and organizational change;
  • Normalizing service components; and
  • Optimizing key service processes.

So, service delivery management is indeed an integration strategy. However,
it is not an integration strategy that can be solely executed by the technicians
in the IT department. It is a business integration strategy that must be addressed
and embraced in the boardroom.

Endnotes

  1. Sumic, Zarko, “Drivers for Business Process Management,” META Client Advisory,
    Sept. 14, 2004.
  2. Greenberg, Paul, “Field Service: Not Just a Cost Center Anymore,” March
    2003.
  3. Ibid.


Pollutants, Resources and Nanotechnology

The embryonic field of nanotechnology, which has been receiving much attention
of late, aspires to the control of matter at molecular and near-molecular scales,
in emulation of the capabilities of biosystems. Element separation technologies
are both an obvious early application of nanotechnology and an obvious motivator
for its funding, as high-value economic drivers exist. Moreover, some molecular-level
technologies are largely passive systems, and hence will be considerably simpler
to design and build than the elaborate “molecular machines” some envision as
the goal of mature nanotechnology, but which for now are the stuff of science
fiction.

At present, pollution control and purification are the economic drivers of
greatest interest to researchers and eventually to the energy industry. As separation
technologies mature, however, they will blur the distinction between a “pollutant”
and a “resource.” Recovered pollutants will begin to have an impact on resource
extraction. After all, zinc extracted from a wastewater stream, for example,
is zinc that does not have to be mined.

Indeed, nanotechnological separation technologies are likely to become “disruptive
technologies.” Such a technology is initially more expensive and less capable,
but becomes quietly established in niche markets where it has distinct advantages.
There it improves until it can invade established markets – to the surprise
of the incumbents (see Figure 1). A familiar example is high-end PCs
supplanting mainframes.

Also, many aqueous solutions, of both natural and artificial origin, become
nontraditional resources given mature separation technologies. Wastewater streams,
acid-mine drainage, seawater, concentrated natural brines such as those in oilfields
or saline lakes – all become potential sources of materials. The largest unconventional
tungsten deposit in the United States, for example, is the brines of Searles
Lake, Calif.

Molecular-level element separation technologies would be much more practical
if there were better control of their fabrication at the nanoscale. As it is,
much of their promise is currently unrealized. Indeed, one reason for the efficiency
of biological systems is that biosystems do organize themselves at the molecular
level.

The Thermal Paradigm

Let’s look at how it’s done now. Conventional technologies used for resource
extraction and processing are thought of as intrinsically energy intensive,
thanks to the familiar images of gigantic open pits, fiery smelters and armies
of heavy equipment moving tons of dirt. Even in the U.S., which imports a great
deal of its primary materials, primary metal and nonmetal production still accounted
for some 3.5 quads (quadrillion BTU) of energy consumption in 2003, about 4
percent of total U.S. energy usage, according to the U.S. Department of Energy’s
2004 Annual Energy Review.

Conventional resource extraction is so energy-expensive because it relies on
vast flows of heat: energy in its most wasteful and disorganized form. Elements
are separated by melting, crystallization, vaporization and so on; the desired
element might separate into a melt; be left behind while other elements are
driven off as vapor; be extracted out as a new compound crystallizes; or something
similar.

For example, iron is extracted by “cooking” iron oxide with carbon (coke).
At elevated temperatures the oxygen combines with the carbon, which wafts off
as gaseous carbon dioxide, leaving behind molten iron. Any silicate (rocky)
impurities form a melt, or “slag,” that is immiscible in the molten iron, analogous
to oil and water, and so can be poured off separately. Even so, the process
is not economic unless the starting material is nearly pure iron oxide. Despite
the fact that iron is the fourth-most abundant element in Earth’s crust, only
a tiny fraction of that iron could be a “resource” with current technology.

Copper is another example. Extracting copper typically begins with “roasting”
a copper-iron sulfide in air. The sulfur is driven off as gaseous sulfur dioxide,
which was vented in the old days but is now recovered because of pollution regulations.
(It is now used to make by-product sulfuric acid.) Left behind is a mixture
of molten copper metal and iron oxide. Silica (silicon dioxide) is then added;
it combines with the iron oxide to form an immiscible slag, which can be poured
off to leave the copper. The iron content is simply discarded. Furthermore,
roasting a sulfide ore wastes energy: it would be thermodynamically possible
to extract useful energy from the sulfide oxidation while also obtaining metal
as a byproduct. (Certain bacteria nearly do this.)

Not only are such processes grossly energy intensive, they are intrinsically
polluting, not just from the combustion necessary to generate the heat, but
also because the element separation is never complete. Some of the desired element
is always left behind. Moreover, byproducts containing geochemically abundant
elements are usually uneconomic and also discarded, as with the iron in the
copper smelting described above. This pyrometallurgy – smelting – hasn’t changed
in essence since antiquity; its only virtue is its relative simplicity. A further
source of expense is the preprocessing necessary to purify the ore. Crushing
and grinding dissipate a great deal of mechanical energy as heat, and physical
separation (“beneficiation”) of the ore minerals from waste minerals (the “gangue”
minerals) also leaves a lot of waste (“tailings”) that must be discarded.

The Biological Example

It is often claimed that the large energy costs of conventional resource extraction
are dictated by the laws of thermodynamics. Yet, biological systems can extract
materials with low energy expenditures. Plant roots extract both nutrients and
water at low concentrations from the surrounding soil. Vertebrate kidneys extract
only certain solutes out of the blood from a background of many other solutes.
For photosynthesis, plants extract carbon dioxide from the air, where its concentration
is only about 350 parts per million (ppm), and do so using only the diffuse
and intermittent energy of sunlight. “Shell builders” such as mollusks (snails,
clams, oysters and so on) extract calcium carbonate from the surrounding water
to build their shells. Diatoms, a type of single-celled alga, are particularly
impressive: their shells are a low-temperature silica glass, made from silica
extracted from the ambient water where it occurs at ppm levels.

Obviously organisms do not carry out thermally driven separation. Instead,
they literally move individual atoms or molecules, using specialized molecular
mechanisms. This is not only vastly less costly energetically but allows separation
from considerably lower concentrations.

Technological Approaches

Molecular-level separation is attractive for applications, such as pollution
control, for which thermal separation would either be prohibitively expensive
or not work at all because of the low concentrations involved. In its
simplest form, molecular separation does require that the material being separated
be free to flow, as a gas or a liquid. Therefore, solutes in water solution
are technologically easier to deal with. For one thing, the issues of grinding
and crushing solid materials do not arise.

Some approaches to molecular separation already have practical applications.
Semipermeable membranes are in essence “molecular filters” that strain out one
or more solutes. In reverse osmosis the separation is driven by a pressure difference
across the membrane, while in electrodialysis charged solutes (“ions”) are driven
across a membrane in response to an electric field. Both processes are now used
in the purification of brackish water and in desalination.

Molecular sieves, such as zeolites, can extract oxygen from the air. Instead
of carrying oxygen tanks, for example, people requiring supplemental oxygen can
now use a machine that plugs into a wall outlet and uses such a sieve to extract
oxygen from the air.

Ion exchange is a well-established technology for “swapping out” ions in solution.
Water softeners are a familiar example: calcium ions in tap water, which make
it “hard,” are taken up by the exchanger in exchange for sodium ions. Ion-exchange
materials include zeolites and various polymers (resins) that have electrically
charged molecular groups to which the ions are attracted.

Finally, in recent decades much research effort has been directed toward molecules
that can bind tightly with other chemical species to form highly stable structures.
Such molecules can be chemically linked to form a highly selective binding surface
for extracting particular solutes from solution. For example, the valuable metal
palladium is commercially recovered by dissolving scrap catalytic converters
in acid. Tethered molecules on a silica surface bind the dissolved palladium,
while much more abundant but nearly valueless metals such as iron remain in
solution. Current (early 2005) prices for palladium are around $200 per troy
ounce.

Solute Selectivity

As the above shows, extracting one particular solute out of a background of
many others is fundamental to a great many applications, both of environmental
and of resource interest. The solute might be valuable or toxic (e.g., lead,
which in dissolved form has chemical similarities to innocuous but much more
abundant calcium). Typically, too, the solute of interest is much less common
than all the others in solution. Not only would it usually be prohibitive economically
to extract everything, but in many cases it doesn’t solve the problem. After
all, the idea is to separate that solute out.

This is the problem of “selectivity.” Chemists have become very skilled in
designing molecules that bind only to certain solutes. In this, of course, they’re
only imitating what biology does already.

However, there is a major problem with “binding” approaches to separation.
Breaking up the binding so that the binding molecules can be used again typically
takes extreme chemical measures – ones that generate a much larger volume of
waste that in turn becomes a serious disposal problem. Indeed, further separation
step(s) are now typically required.

Switchable Binding

“Switchable” binding is a way to solve the “elution problem”: under one set
of circumstances binding occurs, but changing some environmental variable causes
the solute to unbind again. Again, biology has anticipated technology. Hemoglobin,
for example, binds strongly to oxygen in the lungs, but under the different
chemical conditions elsewhere in the body gives up the oxygen to the tissues.
The hemoglobin molecule actually changes its configuration in doing so.

A simple example of switchable binding is electrosorption, which is based on
straightforward electrostatic attraction and repulsion. Charging an electrode
attracts out dissolved ions having the opposite charge; reversing the charge
of the electrode desorbs the ions again. Electrosorption was first proposed
in the 1960s for desalination, but had remained impractical until the recent
advent of very high surface area electrodes. Since the “filled up” electrodes
look like a charged capacitor, too, a great deal of the electrical energy can
be recovered when the electrode polarity is switched to desorb the ions.
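
The capacitor analogy can be made concrete: treating the electrode pair as a capacitor, the energy invested in adsorbing the ions is roughly E = CV^2/2, and a sizable fraction comes back on discharge. The capacitance, voltage and recovery fraction below are hypothetical placeholders, not measured values:

    # Illustrative capacitor-model arithmetic for electrosorption; values are assumed.
    capacitance_f = 1000.0   # assumed effective capacitance of the electrode pair, farads
    voltage_v     = 1.2      # assumed cell voltage, kept low to avoid electrolyzing the water
    recovery      = 0.7      # assumed fraction of stored energy recovered on discharge

    stored_j    = 0.5 * capacitance_f * voltage_v ** 2   # E = 1/2 * C * V^2
    recovered_j = recovery * stored_j
    print(stored_j, recovered_j)   # 720.0 joules stored, ~504 joules recovered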

More subtly, certain electrodes can selectively take up particular ions, typically
because the crystal structure of the electrode has voids into which only those
ions will fit. The lithium ion, for example, is much smaller than the sodium
ion, and a process for drawing lithium ion into a form of manganese oxide having
lithium-sized voids has been patented. Sodium ion, which is always much more
abundant, cannot fit in the voids, so the lithium is selectively extracted.
Reversing the electrical polarity expels the lithium again. Since lithium is
both rare and has growing applications – as in lithium batteries – such a process
has obvious economic promise.

An alternative potential “switch” for binding is light. One approach envisions
using certain molecules that change their structure drastically on absorbing
light. Some research has been carried out on using that structural change to
go from a binding to nonbinding form. So far, however, the molecules used break
down too rapidly for practical separations.

A different approach envisions using the absorption of light energy by a semiconductor
surface. For example, a surface might adsorb ions from a solution in the dark,
but on illumination desorb those ions. Indeed, sunlight might be the trigger.
Merely illuminating a surface to desorb its solutes would obviously be considerably
cleaner and “greener” than flushing it with strong acid or brine solutions.

Conclusion

Molecular-based approaches to element separation constitute a promising economic
driver for nanotechnology research and development in the near future. Some
approaches are even now revolutionizing pollution control and resource recovery.
Moreover, not only will the difference between a “pollutant” and a “resource”
become one of context, but the present paradigm of “dig it up and cook it” for
resource extraction will gradually become obsolete. The obsolescence of that
paradigm, which has been unchanged for millennia, also means that the energy
costs of extracting primary materials will decrease drastically over the coming
decades.

This paper has been adapted from Nanotechnology for Clean Energy and Resources,
available online at the Foresight Institute: www.foresight.org.

 

 

Workforce Renewal: A CEO’s Strategic Imperative

The production of electricity and gas, the primary sources of revenue for most
utilities, requires a wide variety of assets. Most of these assets are mechanical
or electrical equipment – pumps, generators, transformers, regulator valves,
etc. – that have an expected service time before degraded function or failure
occurs. The potential financial losses due to a generator or transformer failure
(on the electric side) or a regulator failure or transmission line breach (on
the gas side) are obvious. Equipment at risk is identified, and sophisticated
techniques are used to monitor its condition. Companies develop programs to
determine when “end of service” is likely to occur, and budgeting and planning
activities are initiated well in advance of potential failures.

However, one critical and very near-term end-of-service issue is not receiving
the same level of scrutiny. The utility workforce retains a vast array of experience-based
knowledge that is specific to critical assets of the utility’s operations. But
this army of highly skilled, well-trained and hardworking men and women is coming
perilously close to its end-of-service date over the next five to seven years
(see sidebar). The aging workforce is as critical a part of utilities’ asset
profile as an aging piece of equipment, and the current scarcity of replacement
talent makes it arguably the asset in most dire need of immediate attention.

Cost cutting over the past decade left a substantial gap in the pool of experienced
internal talent to replace retiring workers. Layoffs and buyout programs were
used to cut workforces, vacated positions went unfilled and new hiring slowed
to a crawl. The willingness of younger, more mobile employees to move on in
the face of instability worsened this imbalance.

Unlike a transmission line or a cooling water pump, off-the-shelf replacements
for people cannot simply be procured. These new workforce assets must be recruited
and trained on the key operational and safety functions of the physical assets.
They also must be given incentives to remain with the company for the long haul
– otherwise, the utility runs the risk of acquiring assets with very short service
lives, a dangerous and costly proposition for any asset, physical or human.

Human Capital as Key

Current financial accounting measures treat investments in people as costs,
no different than raw materials. The financial accounting profession has long
argued that since people do not meet the tests of an intangible asset set out
by the U.S. Financial Accounting Standards Board (FASB), they cannot be regarded
as an asset. In recent years, cost reduction has been considered the mark of
good executive stewardship and rewarded in the short term by Wall Street. Executives,
pressured to cut costs across the board, had little alternative but to treat
the workforce as a cost available for reduction.

Thinking of people as assets with corresponding investments, value and returns
is a concept that executives may intuitively understand; in fact, executives
routinely declare that “people are our most important asset.” But this is where
intuition is at odds with finance and accounting standards. Under those standards,
human capital could not be considered an asset unless it met the following criteria:

  • It has productive capabilities;
  • It is controlled by the company; and
  • It was acquired through a defined transaction that gave the company control.

To resolve this conflict between human intuition and financial guidelines,
it is necessary to clarify the definition of the “human capital” asset. People
are, without question, the owners of their own human capital – their time, effort,
energy and enthusiasm. However, when people invest their human capital in their
work, it becomes an asset of the company. The company acquires this asset through
the package of rewards (compensation, benefits and work environment factors)
that it offers people in exchange for their human capital. Human capital, as
an asset defined in this way, now satisfies executive intuition and business practicality,
and (at least in concept) meets the FASB standards for an intangible asset.[1]

However, rewards are not the only investment companies make in human capital.
Other expenses associated with managing human capital typically include administration
of human resources, recruiting, training, etc. These expenses are investments
in the asset management of human capital and are not dissimilar from the concept
of maintenance and repairs in the management of physical assets.

Asset Value

Assets require investment to generate value. Consider the value of a specific
physical asset – a turbine generator owned by an electric utility company. Its
“value” could be based on one of three characteristics:

  • Its purchase cost;
  • Its sale price on the open market today; and
  • Its production, i.e., the value of the energy that can be produced by it.

For a going concern (i.e., an operating utility), the third characteristic
provides the truest measure of value. However, for accounting purposes, asset
value is recognized as the first characteristic, adjusted over time on the balance
sheet as the remainder of the purchase cost after depreciation is subtracted.
The value of the second characteristic is applied to an asset in liquidation,
not one in a going concern.

Shareholders are interested in going-concern value, not liquidated value. (When
firms liquidate, shareholders are “last in line.”) Calculating this value requires
evaluation of both the throughput – the ongoing output that has economic value
– and the inputs that the organization’s assets convert to throughput. For a
turbine generator, the value of the economic output is easy to compute; it is
the kilowatt-hours produced times the price per kilowatt-hour the electricity
market (regulated or unregulated) will pay for the power. The cost of the inputs
includes the total expended on fuel, subcontracted services, etc., in the process
of producing throughput using the asset.
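
In symbols (introduced here purely for illustration), the value of throughput T and the cost of inputs I for a given period can be sketched as

T = \text{kWh produced} \times p_{\text{per kWh}}, \qquad I = C_{\text{fuel}} + C_{\text{services}} + \cdots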

Yet, it takes more than a turbine generator to make a kilowatt. There are two
other critical assets required for the production function – technology capital
and human capital. If one of the required sets of assets is missing, no energy
is created and no revenue is generated, and the throughput (output that has
value) of the system is zero. This is a critical concept: Assets have value
because throughput has value.

Since physical, technology and human capital (the organizational assets) must
perform in combination in a production function to transform inputs to throughputs,
an overall equation for return from a going concern would use them in the following
way:
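
A plausible rendering of that equation, in the illustrative notation sketched above, with V standing for the value generated over the period by the combined physical, technology and human capital assets, is simply

V = T - I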

The throughput minus the input is the value of the assets in a going concern
provided the investment in assets and generation of value are “periodicity-matched”
– i.e., the value is measured over the useful life of the assets. Depreciating
and amortizing expenses is one technique for reducing assets to a one-year investment
equivalent that tends to match the time frame for standard annual reporting.
But human capital is not treated as an asset today. If it were, there would
need to be an estimate of its useful life.

But what is the useful life of human capital? Remember, human capital is not
people, but rather what people invest in their work. Therefore, the useful service
life of any investment in human capital will always be bound by two parameters
– the nature of the investment and the length of service of the employee.

ROI on Human Capital

When employees go through certain types of training, or when executives are
relocated, companies make large investments expected to pay off over time. The
payoff comes from having the human capital performing well in the business operation
in combination with the other assets. Because the assets’ value is based on
their performance in combination, the value of human capital is derived from
the value of all the assets.

The theory of production functions states that for optimized assets, it is
not possible to trade a dollar of investment in one asset for a dollar of investment
in another to get a better return. Technically, when this condition exists,
value can be assigned in proportion to the investment. This decomposition of
value can be helpful to making investment decisions for human capital when the
production function is fixed, i.e., the work intended to be performed by human
capital (versus that performed by plant and equipment) is clear. The value of
this work in an optimized production function is:
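
A plausible form of that expression, assuming value is apportioned to each asset class in proportion to its share of total, period-matched investment (with I_{\text{phys}}, I_{\text{tech}} and I_{\text{HC}} denoting the investments in physical, technology and human capital, and V_{\text{HC}} the value attributed to human capital; all symbols are illustrative), is

V_{\text{HC}} = (T - I) \times \frac{I_{\text{HC}}}{I_{\text{phys}} + I_{\text{tech}} + I_{\text{HC}}}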

To finish with a calculation of return on investment in human capital (ROI-HC),
the investment in human capital alone needs to be isolated from the numerator.
Dividing the value of human capital calculated above by this investment in human
capital produces the ROI-HC.
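
In the same illustrative notation, that ratio is

\text{ROI-HC} = \frac{V_{\text{HC}}}{I_{\text{HC}}}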

Today’s power plants generally perform, in historical context, at high efficiencies
and low cost. The performance of transmission and distribution systems lies
in a similar historical context – the huge investments in recent years in analysis
and IT have brought the industry to a higher standard of performance than ever.
So while the optimized asset assumption may not be strictly accurate (or else
the discipline of asset management would have been retired by now!), the concepts
underlying these equations are reasonable assumptions for use in modeling human
capital’s part in utility asset valuation today.

Once ROI-HC is calculated, the natural next question from boards or investors
will be, “Is this good?” The answer lies in benchmarking where the company stands
with respect to its peers. The value of a company and the strength of its management
team are determined by what decisions they make in the context of the industries
in which they operate. Investors want to own shares in companies that have management
teams that understand the trends, make keen investments and position themselves
for the future.

While companies in the lowest quartile in ROI-HC have much work to do, even
those in the top quartiles may find their performance falling precipitously
should they fail to manage the transition of their current aging workforce.
Putting investments in the context of their value is critical to knowing how
much return a company is generating. Putting their ROI-HC in the context of
industry performance is critical to knowing if the return is good.

The Strategic Imperative for Workforce Renewal

Failure to plan for this transition will likely result in a workforce that
is under-equipped to perform its work, less productive and more error-prone.
Human errors or inefficient activities that lead to generation, transmission
and distribution losses have clear financial implications. When the increased
likelihood of such problems is considered, a case can be made that the revenue
at risk from the potential loss of talent will dwarf the investments in successfully
managing the transition of a retiring workforce.

Each organization faces a critical strategic choice: continue to rely on the
strategies and tools of the past, or recognize the competitive advantage that
can be gained by developing strategies to manage the new reality. Historically,
utility investors have placed a high value on stability and predictability.
More recently, however, utility investors have also been conditioned to expect
ever-decreasing cost structures – and there is little question that effective
workforce renewal programs will cost money in the short term. How effectively
the long-term vision is communicated to investors will be crucial to assuring
that the strategic and financial benefits of good long-term programs are not
hijacked by the desires of short-term investors.

With new, more relevant measures, executives can lay out a clear and compelling
case for investments in human capital. Furthermore, managing expectations conveys
a command of the sources of long-term value for the company. After all, investors
looking for long-term sustainable value vis-à-vis other investment alternatives
are attractive shareholders for a utility. Demonstrating that proactive steps
are being taken to mitigate risk may not only show the value of management’s
foresight and judgment, but may spark investors to ask questions of other companies.
Savvy executives must ask themselves, “What will we say when this dialogue starts?”

The Aging Workforce

Dozens of articles have been published about the graying workforce in
the United States. A recent U.S. Department of Labor study put some hard
numbers around this looming workforce problem. Between 2002 and 2012,
the Labor Department projects:

  • The percentage of the civilian workforce older than 55 – considered
    the age at which retirement is a consideration – will increase from
    14 percent to 19 percent;
  • The number of workers in the 55+ age group will increase by nearly
    50 percent (20.7 million to 31.0 million);
  • The number of workers in this age group will be 78 percent higher
    than the total increase in the workforce over this period of time; and
  • The increase in workers 55 and older will actually exceed the increase
    in working-age people between 16 and 55 by 3.3 million – the older group
    will grow at a rate nine times faster over this period.[2]

The utility industry has only recently begun to evaluate the impact on
its operations. Initial studies indicate that utilities may be hit even
harder than the general population by workforce attrition. A UTC (United
Telecom Council) Research study of the utility workforce (including telecom
workers) published in 2004 determined that nearly half of this workforce
is over the age of 45.[3] (For comparison, note that the Labor Department
numbers show that for the overall workforce, the percentage of the workforce
over 45 is 36.8 percent.) A study completed in 2004 by the Nuclear Energy
Institute showed that, within the nuclear power sector, three-quarters
of the workforce is over 43, and at least a quarter of these workers –
approximately 16,000 – are expected to retire by the end of the decade.[4]

Some internal studies by utilities show similar alarming results. In
an interview conducted by IBM in 2004, the workforce planning manager
for a large public utility revealed that, after years of aggressive workforce
reduction and minimal new hiring, the utility discovered at the end of
the 1990s that the average age of craft, engineering and operations staff
was 47 – and their average retirement age historically had been around
55. This served as the impetus for an extensive (and award-winning) effort
in their nuclear division to make bold moves to rectify the problem.[5]

The UTC Research study noted that while the vast majority of utilities
were aware of the problems these trends posed, less than one-third had
formal workforce-retention programs in motion. These numbers imply that
focus in this area needs to be increased to forestall serious problems.

Conclusion

There is no question that an aging workforce presents challenges no less critical
(and arguably more critical) than an aging turbine generator. It has strategic
and financial impact, and utilities cannot wait to see how things play out.
This workforce transition will happen in the next five to seven years, hitting
some segments (e.g., nuclear generation, transmission and distribution line
maintenance) especially quickly.

The workforce renewal crisis presents utility executives with a serious dilemma,
but it also presents them with a golden opportunity. Visionary executives who
choose to manage human capital as an asset rather than a pure cost may discover
that applying disciplined techniques to human capital can yield the same dramatic
productivity improvements they have seen with physical asset management. Since
productivity is rooted in returns on investment, the performance of any asset
that is managed with an outdated strategy can be improved with a new focus on
its strategic function and impact on financial value. Achieving that understanding
and applying it to the impending workforce transition is the opportunity within
the crisis that successful executives of the next decade will exploit.

Endnotes

  1. DiFrancesco, Jeanne, “Managing Human Capital as a Real Business Asset.”
    IHRIM Journal. March 2002. “Human Productivity: The New American Frontier.”
    National Productivity Review. Summer 2000.
  2. Toossi, Mitra, “Labor force projections to 2012: The Graying of the U.S.
    Workforce,” Monthly Labor Review, U.S. Bureau of Labor Statistics, February
    2004.
  3. From summary of “The Aging Workforce and U.S. Utilities Industries,” UTC
    Research, United Telecom Council, March 2004.
  4. “2003 Workforce Survey – Findings and Recommendations,” Nuclear Energy Institute,
    January 2004.
  5. Valocchi, Michael and Juliano, John, “The Aging Workforce: What Can You
    Do About It?” Electric Light and Power, January 2004.

 

 

The Next Generation of Energy Trading

Investments in energy trading, curtailed in the wake of the turbulent years
following 2000, have begun to swing back, and agile companies are identifying
new models to improve their results and establish competitive advantages. This
new generation of energy trading will require risk-centric and control-oriented
processes and technologies. Companies must increase the maturity of their internal
capabilities in order to differentiate and position themselves in the evolving
marketplace. The movement of financial institutions into the vacated energy
trading market signals growing opportunity, and utilities will be required to
work harder to achieve credible and competitive energy trading and risk management
capabilities.

Evolving Imperatives of Energy Trading

Corporate interest in energy trading in the U.S. has begun to recover from
recent hibernation; the emerging models, however, have fundamental differences
from their predecessors. In addition to the traditional operations-based companies
starting to test the waters, financial institutions have begun to fill the energy
trading and services void as they identify new opportunities and space in the
market. This market recovery is requiring energy companies to focus on better
management of the risks and regulation in their core businesses, as well as to
respond to growing competition in their markets.

Given the evolution of the industry and competition, companies are facing several
strategic imperatives as they reassess their as-is and need-to-be capabilities
in the market and seek to define the scope of their trading activities and capabilities.

Energy trading has evolved to be risk-centric, as opposed to profit-centric,
mandating a new level of systems and data integration and transparency. As financial
institutions explore new roles in the energy market, utilities must recognize
the importance of meeting the challenges head on. Energy companies that wish
to be successful in trading must develop their capabilities to better analyze
the risks and opportunities present in their trading books on both an aggregated
and disaggregated basis and make decisions based on this information in accordance
with the firm’s strategic direction. According to recent META Group research,
“Accountability is now more important than trading volume,” and this is leading
companies to focus on improving data management, extraction and reporting capabilities.[1]

Energy trading operations must raise their capabilities to enable rapid response
to changes in regulation and competition. Corporations that have invested in
clarifying processes and ensuring oversight in response to regulatory
mandates must now work harder to confront an evolving market and increased competition.
Strategic corporate priorities and the corresponding internal resource capabilities
need to be weighed against the potential risks of failure to comprehensively
manage market and regulatory risks.

Consistent gaps in process, information management and reporting must be corrected.
In addition to trading and risk management, decision makers, executive management,
corporate boards, regulators and stakeholders will continue to require an auditable,
safeguarded base of information to support business continuity of trading operations.
The drive for operational excellence and improved financial performance will
force companies to integrate and optimize business processes across functional
areas while using existing legacy applications.[2]

Evolution of technology is increasing the speed of information integration
and derivatives calculations, opening new opportunities and raising regulatory
expectations. As information integration software enables the combination of
information previously housed in disparate legacy systems, it creates both the
ability and responsibility to consider potential revenue generation opportunities
within an acceptable risk-reward curve. Enterprise risk management is breaking
traditional product silos, integrating risk analysis with customer, geographic
and product analyses.[3]

Critical Success Factors

These strategic imperatives are creating the need for companies wanting to
participate in energy trading to re-evaluate certain critical success factors
and develop plans for improving their abilities to meet increased competition
and evolve quickly to meet new challenges. These factors are presented in Figure
1. Although each area is important to increasing competencies and creating competitive
advantages in energy trading, three aspects are particularly important at this
time.

Organization

Identifying efficiencies in operating models continues to be an important focus
for energy companies. As energy trading activity increases,
prior cutbacks in organizational capabilities must be reconsidered. Examine
consistency and alignment between roles, processes and incentive plans, to verify
that all areas of the organization are working to contribute to the same strategic
goals. Increased focus on business performance management (BPM) via executive
dashboards is enabling business leaders to measure efficiencies, establish goals
and achieve quantifiable progress.

Risk Control

Regulation at federal, state and local levels will continue to require focus
on operational controls, accountability and reporting. Companies should verify
that reporting requirements are automated to reduce costly manual data consolidation
or analysis. In addition, energy trading companies need to elevate their ability
to quickly and accurately quantify and analyze risk (market, credit, operational)
to a level that keeps pace with their new financial-institution competitors.
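
As one concrete illustration of quantifying market risk, the sketch below estimates a simple one-day historical-simulation value-at-risk for a trading book. The function, the confidence level and the profit-and-loss figures are hypothetical; an actual desk would draw its scenarios from its own positions and market data.

# Minimal sketch: historical-simulation value-at-risk (VaR) for a trading book.
# The daily P&L scenarios below are hypothetical.

def historical_var(pnl_scenarios, confidence=0.95):
    """Loss not expected to be exceeded with the given confidence,
    estimated from historical profit-and-loss scenarios."""
    losses = sorted(-p for p in pnl_scenarios)      # losses, smallest first
    index = int(confidence * (len(losses) - 1))     # nearest-rank quantile index
    return losses[index]

# Hypothetical one-day P&L (in $000) for an aggregated power and gas book
book_pnl = [12, -8, 5, -22, 30, -15, 3, -41, 18, -9,
            7, -27, 14, -5, 21, -33, 9, -12, 26, -18]

print("95% one-day VaR ($000):", historical_var(book_pnl))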

Integrated IT Systems

IT systems are crucial to all phases of energy trading, from acquisition of
market information, analytics and trade processing to portfolio optimization
and sophisticated scenario analysis, as well as the obvious reporting and back-office
processing. So critical to effective energy trading are improved IT capabilities
that, according to META Group, 70 percent of companies are increasing spending
on energy trading support systems. Competitive advantage will be gained by companies
that also use technology to gain visibility to enterprise risk and market opportunities.[4]

Toward a Higher Level of Maturity

Important components and levels of competency in energy trading can be evaluated
along a maturity curve. Particularly important is the realization that although
an integrated technology platform is a critical step in attaining high levels
of maturity, it alone is not capable of generating value without strong underlying
processes, roles and incentive systems.

Level 1 maturity has been attained by the vast majority of utilities as a fundamental
part of operations resulting from focus on regulatory issues and risk controls
implemented over the past several years (see Figure 2). How consistently companies
build upon these underlying policies and processes and generate true business
value will determine their success moving forward.

Most utilities are currently operating at a Level 2 maturity, and many are
working to verify that roles, responsibilities and reward structures are consistent
with corporate policy. Reductions (or elimination) of skilled professionals
in trading, research, analytics, control and processing functions have resulted
in increased workloads and multiple roles for some trading staff. Incentive
payment structures need to be aligned with measures of corporate value, and
leaders require accurate information to correctly recognize and reward desired
results.

At Level 3, an integrated technology base becomes a strategic imperative, enabling
integration of information sources, reducing manual efforts required and providing
a flexible basis for developing new products. Although some utilities have invested
heavily (and not always successfully) in new systems implementations, the majority
of companies are assessing their capabilities to quickly integrate existing
systems via data warehouses, middleware or integrated energy trading applications.

Level 4 maturity is attained only by firms with integrated organizational structures,
processes, analytics and straight-through processing designed to enable seamless
end-to-end trade processing and control, from market research to regulatory
reporting. Once at this level, companies have the flexibility to rapidly respond
to market changes and opportunities and have established processes of continuous
improvement to support adjustments as market conditions change.

Next-Generation Platforms

The next generation of energy trading platforms will respond to the imperatives
of regulatory oversight, increased competition from traditional and nontraditional
segments, sector convergence, mergers and consolidations and continuous change
in the business environment. Mature energy traders require new business and
technological capabilities based on new systems architecture requirements. Trading
organizations will require prompt access to various components integrated into
a consolidated trading platform with underlying data warehouses and interfaces
for access to market information and back-office systems. Speed to market, flexibility
and integrated services will be paramount.

Legacy System Transformation

Energy trading organizations that lack the appetite for investments in new
trading systems are adopting strategies to transform their existing systems
so they can introduce flexibility and portability not currently present. Utilizing
new technologies to unlock the value resident in legacy systems, companies are
leveraging enterprise architecture solutions and portals to reduce laborious
systems integration efforts, to reduce the need to modernize interfaces and
to solve data synchronization issues. Established methodologies focused on application
consolidation, renovation and Web enablement are bringing results at reduced
costs and reduced risk to ongoing operations.

Service-Oriented Architecture and Web Services

The goal of service-oriented architecture (SOA) is to “componentize” key business
processes so that they can be more easily changed to meet new business conditions
while also lowering the cost needed to manage and change the application. Componentization
of business processes and application functions creates opportunities for collaborative
development and permits integration of established business rules with new applications.
Energy traders will have the ability to graft new functionality onto the existing
suite of applications and diverse databases. Energy trading organizations have
invested heavily in legacy applications that often have achieved neither the
desired capabilities nor ROI. Web services allow true plug-and-play capabilities,
giving the energy trader the ability to implement new components or functionality
quickly and cost effectively as the business environment evolves. In addition,
SOA gives the energy trading organization the ability to achieve true BPM functionality;
more than real-time position management, it is proactively managing the portfolio
to a set of enterprise performance measures (as opposed to reacting to stale
indicators). Energy traders utilizing BPM concepts do not wait until the end
of closing periods to know their integrated key performance measures. BPM is
designed to provide near-real-time access to the performance measures and the
ability to assess the impact of potential decisions on these measures.
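
As a minimal illustration of the componentization idea (not a depiction of any particular vendor's platform), the sketch below exposes a hypothetical aggregated-position lookup as a small web service using only the Python standard library; a dashboard, risk engine or report writer could then consume the same component over HTTP.

# Minimal sketch: a business function exposed as a web service component.
# The endpoint name and the position data are hypothetical.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical aggregated positions (MWh) by delivery month
POSITIONS = {"2005-07": 1200, "2005-08": -450, "2005-09": 800}

class PositionService(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/positions":
            body = json.dumps(POSITIONS).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    # Any authorized client can now consume the same position component.
    HTTPServer(("localhost", 8080), PositionService).serve_forever()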

Data Warehouse Integration

The importance of seamless data access (both transactional and market information)
to energy trading cannot be overstated. In the past, much of the data created
or acquired was simply dumped into discrete databases. Because of regulatory
pressure, more data integrity control has been introduced; however, many organizations
have yet to realize the full benefits of having a componentized data service
capability, enabling and controlling access to all corporate data. Next-generation
architectures must provide tools that will expedite the flow of business information
to the critical decision-making processes and support enterprise value optimization.
These tools are becoming a necessity to identify and evaluate potential trades
and their impact on the trading book and on the enterprise portfolio. The capability
to quickly and accurately assess the impact on credit risk, liquidity and market
risk enables comprehensive scenario-building to support risk management and
enterprise value optimization.

The next-generation energy trading platform offers companies the ability to
perform in the new risk-focused environment. The ability to integrate information
from trading platforms with information from other corporate systems will be
a critical factor in determining which energy traders will be key players and
which will cede space to more capable competitors.

 

Outsourcing Becomes ‘Partnering’

The energy and utilities marketplace is one of the most challenging business
environments that exist today, and it is also an industry sector where business
transformation outsourcing (BTO) is helping companies overcome many challenges.
These pain points include: aging infrastructure assets; falling revenues and
net incomes; impoverished pension funds and rising healthcare costs; corporate
scandals that are casting doubt on deregulation, which utilities have eagerly
sought, and even raising the specter of re-regulation; credit downgrades; and
customer demands – from increasingly complex and variegated markets – that are
becoming ever harder to fulfill.

It is an industry environment where most major players are looking for new
ways to focus on their key market challenges and become more flexible in their
ability to respond to market and industry issues and events in real time. Increasingly,
leading companies in the energy and utilities sector are turning to BTO to implement
the large-scale changes needed to support growth, cut costs, manage risk, increase
organizational agility and develop the necessary capabilities to be competitive.

To implement these large-scale changes, many energy companies are aggressively
pursuing a range of initiatives including:

  • Revolutionizing capital and operations and management budgeting processes;
  • Reducing costs while at least maintaining reliability and service levels;
  • Integrating business and technology for optimum flexibility;
  • Improving knowledge management;
  • Proactively managing regulatory relationships;
  • Effectively managing the entire portfolio of assets; and
  • Building a performance-based culture.

A key strategy for executing these types of initiatives is BTO. BTO is a service
that can deliver business transformation faster, more successfully and more durably.
This is achieved by leveraging the expertise and scale of a partner who has
a comparative advantage in operating processes – a necessary trait when companies
look to pursue a broader strategic agenda.

The Difference Between BTO and BPO

Business transformation outsourcing has a broader scope and a deeper impact
on shareholder value and encourages far greater collaboration with external
providers than business process outsourcing (BPO). BTO is used to execute large-scale
change projects designed to increase shareholder value. BPO is primarily a tool
to cut costs and is applied to organizational processes on a piecemeal basis.
Rather than simply taking on support functions, as is typically the case in
a traditional outsourcing relationship, a BTO partner will collaborate to integrate
business processes back into the client organization, thus facilitating real
enterprise-level change. The BTO partner is involved at the strategic level,
and so is in a position to help clients enhance their core competencies.

As a large-scale transformation technique, BTO is designed to provide step
changes that can deliver managed growth, together with the agility to sustain
benefits in the face of competition. In the short term, conventional outsourcing
can deliver lower costs through scale and labor arbitrage benefits, but these
are likely to be eroded over time.

How BTO Delivers Business Results

There are groups of global companies that have successfully used BTO to execute
a broad range of strategies with strong results. The application of BTO has
been wide ranging: from improving business focus and entering new markets to
enhancing control over business units and completely changing operating models.
A common thread is that by sharing transformation with an external partner,
BTO has allowed these companies to make major changes on more than one front
simultaneously.

The result for these companies has been the creation of new capabilities that
have enabled the implementation of a broader strategic agenda.

Managing Business Processes for Williams

Williams, a leading natural gas producer, processor and pipeline company, and
IBM have partnered to help transform and manage certain business processes for
Williams. These business processes include accounting, finance and human resources,
as well as key aspects of Williams’ information technology, including enterprise-wide
infrastructure and application development.

IBM Business Consultants will work with Williams to apply redesign and best
practices to these processes, in such areas as accounts payable, fixed assets,
general accounting, payroll, compensation and benefits administration.

Williams expects through BTO to reduce costs more quickly and at levels beyond
what the company projected it could accomplish on its own. The BTO arrangement
is also expected to improve Williams’ ability to adjust its support operations
as business conditions dictate while maintaining the high quality of service.

In providing Williams with a new level of HR, financial management and IT services,
IBM will deliver consulting methodologies, transformational technology solutions
and delivery skills. With IBM managing these business processes in a responsive,
flexible manner, Williams can focus its resources on core market and industry
challenges.

Innovative New Enterprise Models

Enel’s Innovations

Enel, for example, is one of the world’s major electricity companies and the
main operator in Italy. In 2001, Enel launched a unique and challenging project:
the creation of a new, intelligent network through the replacement of electromechanical
meters with remotely read and managed digital meters. The project involves 30
million interval meters. The system is mainly based on power line communication
using the existing low-voltage electricity network.

The new infrastructure provides Enel complete monitoring of its low-voltage
network, including information on the location and nature of faults. In the
event of power rationing, the system allows power curtailment, reducing the
maximum power available for the customer.

Beyond this functionality, the new infrastructure allows Enel to reduce the
cost of customer management by more than 40 percent, while increasing customer
retention. The system also permits the company to reduce energy losses and customer
disputes. This gives Enel a ready platform for offering new services directly
to homes based on customer segmentation.

Enel and IBM are applying these technologies and methodologies through BTO
to a range of utility companies. For example, ASM Brescia, a multi-utility company,
recently announced that it is working with IBM consultants and Enel to develop
a comprehensive, integrated automated meter management (AMM) system.

Under the terms of the agreement, ASM Brescia will use BTO to transform its service
offerings, moving to remote management of both electricity distribution and other
services provided to more than 200,000 customers in the Brescia area, while
helping to improve the efficiency of the services delivered. To achieve this,
IBM will integrate 200,000 automated electronic meters, linking them directly
to ASM Brescia’s billing and customer service systems, and replacing all of
ASM Brescia’s traditional meters by 2006.

ASM Brescia customers will reap significant advantages as the new solution
allows meters in customer homes to be entirely remotely managed. ASM Brescia
customers will no longer be required to wait for a meter reader, and the monitoring
of consumption can be done in real time with accuracy. Over time, the solution
will allow the introduction of highly customized, flexible pricing options,
bringing even more control over energy consumption to ASM Brescia customers.

Embraceable BTO

Whether finding new ways to manage business processes to improve enterprise
responsiveness while lowering overall costs or applying the latest in analytics
and emerging technologies to create new business models, forward-thinking energy
and utility companies are embracing BTO.

Look for a BTO partner that continues to develop and acquire the necessary
business process skills and business innovation capabilities. Look for a BTO
partner that couples these with its existing technology and application management
expertise, so that it is in a unique position to help clients solve complex
business transformation challenges. As industry pressures continue to emerge,
and as traditional industry assets continue to age, this ability to work smarter
will help utility and energy companies meet the transformation challenge.

 

 

Benefits Realization

Pick up any current periodical – from CIO magazine to Fortune to BusinessWeek
– and you only have to skim the table of contents to find articles titled “Getting
the Most Out of Your IT Dollars” or “How to Improve the ROI on IT Projects.”
Furthermore, the literature is full of disappointing statistics, such as the finding
that between 60 and 80 percent of all IT or business transformation projects fail
to deliver any measurable economic value.

These statistics, as well as concerns from CFOs and other executives, should
have CIOs, IT practitioners and management consultants concerned. The real key
is to understand the underlying causes of such failures and what can be done
to prevent them from recurring.

Transformation initiatives often produce business cases that tout substantial
cost savings. In today’s environment, however, transformation efforts need more
than an emotive business case to succeed. They also require flexibility and
formal accountability. Many large projects, unfortunately, seem to fail to produce
tangible benefits because they are not systematically measured and tracked.
Implementation of a benefits realization program can help resolve the common
difficulties in achieving and realizing ROI from transformation efforts. Not
only does benefits realization provide a formal, sustained process for identifying
and capturing the increased revenues, it also helps reduce costs resulting from
business transformation efforts.

Hit by a double whammy of decreased margins and increased competition, IBM, for
example, clearly had to make significant strides to survive and regain
competitiveness. Simplification became IBM’s broad solution to a changing
marketplace. The next step was to decide how to accomplish this efficiently.
IBM’s specific objectives became to transform, integrate and Web-enable the
core business processes to increase revenue and profits, reduce costs and enhance
customer satisfaction and loyalty. Over a five-year period IBM improved time
to market by 75 percent, customer satisfaction jumped 5.5 points and total savings
exceeded $9 billion.

Another example is the success of a benefits realization approach at a global
media and entertainment company. The project trigger was a global SAP implementation
in finance, HR and IT across all the business units. This project had two key
objectives. The first objective was to determine an accurate headcount in finance,
HR and IT within each business unit. The second objective was to realize savings
in finance, HR and IT by FTE reduction. The formal benefits realization process
has been running for three years and has provided credible benefits realization
documentation that allows for monitoring and tracking of the savings. The actual
cost of supporting the business units for finance, HR and IT is also now visible
to the company head office through the controller. Additionally, the client
is on track to exceed its original business case commitment of $130 million in annual
labor savings.

While tens of billions of dollars in savings can be attributed to instituting
formal benefits realization programs, experience has also shown several recurring
pitfalls to watch for.

Challenges in Today’s Business Environment

Most transformation or IT systems implementation programs that have failed
have done so due to the inability to clear hurdles in four broad areas:

  • Change management;
  • Benefit commitment and ownership;
  • Execution; and
  • Realizing results.

Change Management

Lack of a proper and sufficient change management program can impede transformation
due to underestimation of efforts required to make and sustain changes in aggregate
patterns of behavior. In a successful transformation program, two aspects of
change management are critical. First, change management needs to secure leadership
commitment and preparedness. Second, it must maintain clear, objective, constant
communication with stakeholders. To enable a successful transformation, your
organization must prepare everyone – from top management to the end user – in
order to foster positive results. Based on experience, IBM has noted numerous
failures due to the dangerous assumption that everyone is at the same level
of readiness, and the failure to manage expectations.

Benefit Commitment, Incentive and Ownership

In most transformation programs, a recurring theme of insufficient or limited
executive commitment has plagued the success of the program. Top management,
including the C-level, must be personally involved and demonstrate an unwavering
commitment to the transformation program in order to support its success. Such
commitment is crucial, and its absence can trigger a failed transformation program.

In the past, transformation programs have also struggled due to lack of clear
identification of “benefit owners.” In order to help ensure success, each benefit
must be owned by an individual who takes responsibility for capturing and reporting
the savings attributed to that specific benefit. This person should receive
incentives for achieving benefit goals and be held accountable when savings targets
are missed.

Execution

At crunch time, schedule, priority issues and budget concerns have often derailed
the execution of the transformation program. Even the best of intentions are
challenged by the pressing reality of implementation issues. Implementation
issues tend to fall into two main categories. First, change fatigue – essentially
the difficulties associated with bringing the initiative into reality. Second,
wallet fatigue – due to the increasing cost of the projects. These two issues
often deflect focus and priority from the successful transformation of the business
unit, process or company; thus they are severe threats to success. Establishing
and maintaining a benefits realization program as a strategic priority is required
to keep the proper focus and dedication necessary to push it through an organization.

Realizing Results

Poor communication, failure to recognize realized savings and unwillingness to
make tough decisions make actual results a tough hurdle to
overcome. If the focus and push to recognize results are slowed by the tough
nature of the business, realizing results in a timely fashion will be threatened,
potentially calling into question the success of the transformation.

Benefits Realization Enablers

The best way to optimize the ROI of the time, effort and money spent on transformation
efforts is by institutionalizing a formal benefits realization program with
the following attributes.

A formal, empowered governance structure to oversee the benefits realization
program:

  • The governance organizational structure should be composed of senior executives
    with sufficient clout to overcome obstacles on the path to benefits realization.
  • The formal benefits realization program reporting structure should be detailed,
    its role clearly defined and explicitly communicated to the key stakeholders
    in the company.
  • The commitment of the transformation stakeholders should be established
    by linking the success of the benefits realization program to their respective
    incentive plans.
  • The governance structure should be in place until all the benefits have
    accrued.
  • The governance structure should exhibit the qualities and balance shown
    in Figure 1.

Benefits realization program with detailed processes and supporting tools:

  • Processes for capturing, tracking and taking action on under-realization
    of benefits should be clearly identified along with data sources and tools
    that will support these processes.
  • Benefits should be captured in templates and tools on a set periodic basis,
    after which reporting to high-level executives on realization or under-realization
    of benefits must occur (a minimal sketch of such a record follows this list).
  • The process should be standardized across all business units to promote
    consistent practice and capturing of benefits.
  • The processes should support the continuous monitoring of the benefits realized
    throughout the execution of the transformation plan and subsequent accrual
    of benefits as shown in Figure 2.
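
As a minimal illustration of what such a captured benefit might look like (the field names and figures below are hypothetical), each benefit can be recorded with an accountable owner, a committed target and realized savings to date, from which the variance reported upward follows directly.

# Minimal sketch of a benefit record for periodic capture and reporting.
# Field names and sample figures are hypothetical.

from dataclasses import dataclass

@dataclass
class BenefitRecord:
    name: str              # e.g. a committed savings item from the business case
    owner: str             # accountable benefit owner
    annual_target: float   # committed savings ($000)
    realized_to_date: float = 0.0

    @property
    def variance(self) -> float:
        """Positive means ahead of commitment; negative means under-realization."""
        return self.realized_to_date - self.annual_target

benefit = BenefitRecord("Finance FTE reduction", "VP Finance", 2500.0, 1800.0)
print(f"{benefit.name}: variance ${benefit.variance:,.0f}K")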

In the utility industry, there are several additional challenges to capturing
savings that need to be taken into consideration. The first concerns the perception
in the industry that regulatory restrictions prevent retention of efficiency
gains, resulting in decreased motivation to target and achieve the savings resulting
from a transformation. Naturally, lack of incentives for middle-level managers
to realize savings from transformation initiatives often reduces commitment
to achieving these savings, making it all the more important that high-level
management and C-level executives commit to benefits realization as a long-term
strategic priority. Furthermore, that commitment must be communicated clearly to
all stakeholders to maintain focus throughout the process.

Focus on day-to-day operations within the utility industry can also have an
impact on benefits realization. Key managers in this industry are regularly
preoccupied with daily, short-term operational issues; thus they lose focus
on the medium- and long-term savings of the transformation initiatives they
had foreseen. For example, noncore utility investments of even a large magnitude
are often given very little attention due to the relatively larger monetary
value of other capital investment programs in electrical assets. In addition,
there is extensive focus on metrics for engineering and operations; however,
financial metrics such as benefits realization are often overlooked. This loss
of focus and priority can also threaten the successful capturing of savings.

Benefits realization also demands that processes be standardized and regulated
by a coordinating governing body. Unfortunately, standardization across internal
business units is often very difficult in the utility industry. Business unit
leaders are typically averse to change and strongly positioned as decision makers
or influencers within utility companies. They often operate separately, focused
on their business areas, and are unwilling to consider the advantages of adopting
standard processes across the enterprise. Simply put, benefits realization requires
changes within the often change-averse culture in utilities. This cultural
characteristic also requires that top management step in with the necessary
vision and clout to push the program toward success.

Do You Need It?

Benefits realization can be particularly helpful if your business finds itself
in one of the following situations. (Note that this is not an exhaustive list,
but a list of common conditions where benefits realization has been shown to
be effective.)

  • Are you redefining your company mission and vision?
  • Does your business have a history of failed IT and investment initiatives
    that have come short of reaching your savings goals?
  • Are you in the middle of a transformation effort?
  • Is your company in serious financial trouble and in need of active cost
    monitoring?
  • Do you have a prescribed method for business case modeling and follow-up?
  • Would you like to be able to effectively track and measure workforce productivity?
  • Do you need more rigorous requirements for reporting of project cost and
    benefits?

As mentioned, most transformation and IT system implementations are not perceived
to produce the economic value touted in the original business case for the project.
How do you prevent your transformation project from adding to these disturbing
statistics? Benefits realization may be the answer.

Putting Performance First

Poor financial performance in 2001-2003 has led utilities to re-examine their
management approach. Utilities that are making it out of the slump have implemented
new management techniques such as Six Sigma, balanced scorecard and centers
of excellence; and others have adopted business process outsourcing (BPO) as
part of their strategy. These companies have also found the need to address
not only financial but also operational performance. And technology plays a key
role in driving execution of performance goals through the organization.

According to Kathleen Wilhide in IDC’s “Worldwide Financial and Business Performance
Management Applications 2004-2008 Forecast Update (July 2004),” business performance
management (BPM) consists of a set of analytic applications and software technology
infrastructure “designed to measure and optimize financial performance and/or
establish and evaluate an enterprise business strategy.” The core applications
within BPM suites include financial consolidation, which supports both statutory
and management reporting; planning and budgeting to support finance and operations;
and scorecarding/dashboard applications to drive not only key performance indicator
(KPI) reporting but strategy execution. Additional areas include forecasting
applications as well as activity-based costing/management supporting cost and
profitability management.

Perhaps the most important applications to support performance execution are
scorecarding/dashboards (scorecard) solutions. These applications support the
tracking of KPIs in alignment with the accountability and strategic goal structures
of an organization. While a portal merely allows access to data and analysis,
scorecards are role-based applications that give all authorized employees, based
upon their individual role or accountability, visibility into the KPIs and related
strategic initiatives of the organization, from high-level plans down to lower-level
action plans. The applications make progress against goals visible and facilitate
collaboration that enables employees to document progress or make course corrections
as often as makes sense (see Figure 1).

Let’s look at an example of how a scorecard application drives visibility and
accountability. The scorecard application can report the daily customer average
interruption duration index (CAIDI) indicator for organizational units such
as service center, district office or region. If the CAIDI is out of line with
target and made visible on a timely basis, employees can take action. For example,
a supervisor can authorize overtime to accelerate the restoration process. This
decision can be documented in the scorecard application and made visible organizationally.
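
To make the calculation concrete, the sketch below shows one way a scorecard back end might roll daily outage records up into a CAIDI figure per organizational unit; CAIDI is total customer interruption duration divided by the number of customer interruptions, and the record layout and figures here are hypothetical.

# Minimal sketch: daily CAIDI rolled up by organizational unit.
# Outage records are hypothetical: (unit, customers interrupted, customer-minutes).

from collections import defaultdict

outages = [
    ("Service Center A", 1200, 96000),
    ("Service Center A", 300, 15000),
    ("District Office B", 4500, 540000),
]

def caidi_by_unit(records):
    minutes = defaultdict(float)       # total customer-minutes of interruption
    interruptions = defaultdict(int)   # total customer interruptions
    for unit, customers_interrupted, customer_minutes in records:
        interruptions[unit] += customers_interrupted
        minutes[unit] += customer_minutes
    return {unit: minutes[unit] / interruptions[unit] for unit in minutes}

for unit, caidi in caidi_by_unit(outages).items():
    print(f"{unit}: CAIDI = {caidi:.1f} minutes")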

Measurement based on objective data from multiple sources keeps the organization
on track. Standard reports from single applications are no longer sufficient
to track progress. That leaves many mid-level and senior managers to query various
applications and act as manual integrators to put together reports that are
less than timely and fairly subjective. A scorecarding/dashboard approach integrates
information from disparate applications and sources into an information repository
that crosses finance and operations. The whole premise of a scorecard methodology
is balance – looking not only at financial metrics but at customer, operational
and innovation metrics as well.

Analytics Are Required

Enterprise application integration and portals have helped utilities make strides
toward increasing visibility to data and information. However, analytic applications
provide process automation that support closed-loop decision support. For example,
the outage of a large generation unit or combinations of smaller units on a
transmission node can have a tremendous impact on the bottom line. At the most
competitive generation companies, high-level executives can see real-time generation
against capacity and drill down to MW production but – and here’s where analytics
are required – the real value is to see production within the context of market
opportunity for the fleet as a whole, and make adjustments accordingly.

Analytics put information in context and should lead to an understanding of
the root cause of a problem. For example, at one utility company the VP of energy
delivery was alerted that his unit was over budget. Drill-down capabilities
allowed him to determine that overtime was high. Business unit managers were
able to drill down further to find that job estimating was off. Finally, investigation
led to the conclusion that designers and construction crews were incompatible,
requiring a change in approach. The real value in analytics comes when these
applications facilitate decisions to make operational changes that positively
impact performance (see Figure 2).

Focus on Metrics

Wall Street tends to dote on financial metrics, such as earnings per share,
debt reduction, and merger potential. But those indicators are extremely sensitive
to operations, especially related to reliability. Witness the decline in regional
utility stock prices just after the 2003 blackout or the downgrades in credit
ratings of utility companies experiencing unusually high outage rates.

That is why utilities focus on operational metrics. For example, Progress Energy
recently announced that it had made significant improvement against its three-year
“commitment to excellence plan.” Among other things, it was able to improve
employee satisfaction, reduce injury rates, improve reliability by more than 20
percent, improve its J.D. Power customer satisfaction rating and increase its
electricity reserve from 15 to 20 percent. These metrics address the employee,
regulator and ratepayer, in addition to the shareholder.

There is an important distinction between enabling operational performance
and enabling better operations across business units. For example, one utility
recently implemented a portal to facilitate outage restoration. Rather than
utilize the user interface of the outage management application, the utility
installed a portal that provides Web access to dispatchers, call center representatives,
customer contact staff, field crews and governmental affairs personnel on outage restoration
status during storm or other emergency conditions. While this visibility means
fewer calls to the dispatch center and speedier restoration, it does not display
performance or evaluate impact on the corporate bottom line. Many utilities
have gone as far as enabling better operations; few have gone as far as enabling
operational performance.

People and Process

There is a great deal of process work behind developing the right metrics.
It is part analysis of existing sensitivities, part negotiation. This is best
achieved by active communication between senior executive and mid-level management
working groups. Rather than an ad hoc approach, a governance structure is required
because the object is to achieve continuous process improvement. According to
one CIO, “If you ever think you are done, you are not.”

Business units must do the hard work of coming up with a meaningful set of
limited objectives directly tied to corporate goals. Too many KPIs lead to a
lack of focus. According to one utility executive, “Effective results can be
obtained by developing a hierarchy of key measures linked to summary-level metrics
that cover large work groups. But lower-level metrics help teams and individuals
see the linkage between their operational performance and business results.”

Metrics displays are only window dressing if there is not an incentive to use
them. For one utility company, BPM is so useful that plant operations personnel
are using it in their daily meetings. Another company has found that bonuses
or other incentives tied to performance metrics are what motivate individuals
and workgroups and lead to progress against goals (see Figure 3).

A Balanced Scorecard

One utility has taken a balanced scorecard approach since the late 1990s. It
replaced a previous effort that focused on making KPIs visible to senior business
executives. The company started by looking at all of its operational metrics,
including those that were team or individually focused and those that were locally
managed. Teams culled those into business unit level and enterprise dashboard
metrics. Group and individual incentives tied to these metrics were added to
ensure that the organization was actively moving in the right direction.

For the IT unit alone there were hundreds of operational measures and dozens
of service level agreements with service providers and outsourcers. These were
culled down to the measures underlying the IT performance index (a simple roll-up
of such measures is sketched after the list):

  • Operational performance;
  • Planning and business unit alignment;
  • Customer satisfaction (in this case the internal customer); and
  • Performance versus budget.
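
As a purely hypothetical sketch – the article does not say how the utility actually
scores or weights these measures – a roll-up of the four measures into a single
IT performance index might look like the following; both the component scores
and the weights are invented for illustration.

    # Hypothetical weighted roll-up of the four IT performance index components.
    component_scores = {                       # each scored 0-100, invented values
        "operational_performance": 92.0,
        "planning_and_alignment": 78.0,
        "internal_customer_satisfaction": 85.0,
        "performance_vs_budget": 88.0,
    }

    weights = {                                # assumed weights, must sum to 1.0
        "operational_performance": 0.40,
        "planning_and_alignment": 0.20,
        "internal_customer_satisfaction": 0.20,
        "performance_vs_budget": 0.20,
    }

    it_performance_index = sum(component_scores[name] * weight
                               for name, weight in weights.items())
    print(f"IT performance index: {it_performance_index:.1f}")  # weighted average, 0-100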

Employee bonuses and other incentives are tied to:

  • Corporate financial results;
  • Safety; and
  • IT operational performance.

IDC has found that tying performance management efforts to compensation is
a best practice, but many companies are reluctant to do so – the culture shift
is a challenge, and organizations must be ready to defend the consistent calculation
of KPIs.

Since 1999, this utility has improved both financial and operational performance,
growing earnings per share by 70 percent. Earnings growth is attributed to a
combination of factors, including increased sales, positive rate actions, effective
management of operations and maintenance costs, and accretion from share repurchasing.
Although no formal study has been made correlating the balanced scorecard strategy
with these results, it is not a stretch to conclude that implementation of BPM
technology is making a significant contribution. A recent study by IDC showed
that BPM initiatives earn an average ROI of 129 percent – and when organizations
taking part in the study tied their balanced scorecard initiatives to improvements
in the operational measures surfaced through the process, the ROI was in many
cases far higher than the average.

Reaching for Understanding

Energy Insights has predicted that in 2005, utilities will get serious about
business process outsourcing. According to Energy Insights’ “Top Ten Predictions
for the Energy Industry in 2005,” released in January 2005:

Interest in BPO will be driven primarily by financial pressures including
the need to achieve M&A synergy savings and the friction between a “back to
basics” strategy, which decreases the ability to grow revenues, and earnings
growth objectives of 5-10 percent, requiring a sharp focus on cost cutting.
Leading indicators include … [the] record setting deal between TXU and Capgemini
… valued at $3.5 billion over 10 years, and the 10-year deal between Entergy
Solutions and Alliance Data Systems. … BPO will become the norm instead of
the exception, especially in shared services such as finance, HR, supply chain
and IT as well as for customer service and billing.

A utility cannot truly evaluate BPO without an understanding of its own performance
metrics. BPO vendors offer business process outcomes. To structure a deal, at
a minimum, a utility will need to understand what outcomes it wants and whether
outsourcing those processes will actually reduce costs. BPO vendors also offer
hundreds of measurement possibilities; however, only the utility will know which
of these will have a significant impact on performance. For example, BPO vendors
may measure the cycle time to produce a utility bill. But does reduction in
cycle time really matter to the bottom line?

Once a utility commits to outsourcing, BPM is required to monitor and manage
the outsourcers’ performance. In the case study above, the IT performance index
links not only the CIOs (business unit and shared service) but also the IT service
providers and BPO vendors.

The Technology Is Mature

There is a wide range of products on the market that support BPM. Most vendors
have a suite of BPM solutions that integrate financial consolidation, planning/budgeting
and scorecards/dashboards. The niche vendors with a BPM/BI heritage include
Hyperion, Cognos and SAS; enterprise resource planning (ERP) application vendors
such as SAP and Oracle/PeopleSoft have also developed BPM suites. PerformanceSoft
is a vendor that successfully delivers only scorecards/dashboards. Utilities also
use portal products from Plumtree along with homegrown analytics to build their
own dashboards. Operational scorecarding/dashboard vendors in the industry include
Obvient and SAS.

Recommendations

It is an old saw that to be successful, initiatives need corporate backing,
but this is one initiative that will not work without corporate initiation and
ongoing commitment. IT can certainly support this effort with technology. IT
can also provide leadership in this realm, as it is perfectly situated by virtue
of supporting a majority of the business units in a utility company.

For BPM to work, utilities must focus on the following:

  • Focus on the cultural aspects of formalizing BPM initiatives first – technology
    can’t make it happen without management support and a cultural shift.
  • Invest in information technology, especially BPM, but take a staged approach;
    perhaps piloting a solution initially in one business unit such as delivery.
  • Use business process modeling to support the process of defining metrics
    and process re-engineering.
  • Get the organization laser-focused on a few key corporate priorities and
    the operational metrics that support them. Too many key performance indicators
    lead to a lack of focus.
  • Information integration is the hardest part of any BPM initiative, but it
    is key to providing the analysis needed to support decision making. Work toward
    presenting drill-down capabilities into further detail on what is being measured,
    not just for executive management, but also for those responsible for operations.
  • If a single ERP system is prevalent, BPM applications deliver a business
    intelligence architecture that facilitates integration across applications
    and can be a good first step. A more robust BI strategy may be in order to
    integrate information from many disparate systems.
  • Continue to push vendors to develop analytic solutions specifically designed
    for the utility market – or verticals within that market.
  • Enable push instead of pull. Employees will visit a report center only when
    needed. According to Henry Morris, group vice president and general manager
    for integration, development and application strategies at IDC, “Web-based
    enterprise portals are ideal environments for displaying a dashboard and delivering
    additional contextual information such as relevant news and access to documents.
    Since portals define user access rights in terms of roles in an organization,
    different displays can be directed to different types of users.”

The LNG Question

During 2000, natural gas prices in the United States took a long-anticipated
turn, and market watchers are now waiting (at the time of writing) for mean
reversion to happen. And waiting, and waiting. No, all energy traders did not
collude to bring about this turn of events, not even the so-called Texas Cartel.
And no, the prevailing price deck does not mean we are “running out of natural
gas” or that we do not still have a rich, deep North American resource base.
Rather, the price deck we see today, and which will likely last a bit longer
(with some adjustment if oil cools off), is a logical result of long-term business
cycles in the natural gas industry, the progression of North American resource
development and patterns of policy/regulatory treatment for what is now considered
to be a premium fuel and feedstock. The upshot is that at a time when supplies
in both the lower 48 states and Canada are mature and maturing, and with natural
gas consumption at least stable (demand growth need not even be a factor), imported
liquefied natural gas (LNG) is increasingly looked to as the balancing item
for the entire continental marketplace.

How did we get to this juncture? The Natural Gas Policy Act of 1978 was an
acknowledgement of pent-up demand (notwithstanding an alphabet soup of policies
meant to discourage natural gas use during a time of perceived shortages and
curtailments) and policy-induced supply imbalances. The resulting strong price
signals for new exploration and production unleashed a wave of drilling – not
just in the lower 48, but also in Canada’s huge Western Sedimentary Basin. Collapsed
oil prices and a pervasive “gas bubble” stymied resource development and led
to extensive upstream industry consolidation (see Figure 1).

De-bottlenecking the interstate natural gas pipeline grid with the U.S. Federal
Energy Regulatory Commission’s final restructuring rule, Order 636, implemented
in 1992, capped several years of rule making to achieve broad, efficient market
clearing transactions between independent buyers and sellers along a common
carriage pipeline tollway. With similar, indeed front-running, initiatives by
Canada’s National Energy Board, a common market for natural gas – the largest
in the world – evolved along the Canadian and U.S. border. The Clean Air Act
Amendments of 1990 and the Energy Policy Act of 1992 enshrined natural gas as
a clean-burning fuel for electric power generation and extended the argument
that a wholesale market for electric power was the regime most appropriate for
a wholesale-market-driven natural gas industry system.

The Supply-Demand Question

Today, roughly 16 percent of the natural gas we use comes from Canada. Almost
3 percent of our natural gas consumption is supplied by imported LNG through
four operating terminals (Everett, Mass.; Cove Point, Md.; Elba Island, Ga.;
Lake Charles, La.). We export Alaskan natural gas production in the form of
LNG from the Cook Inlet to Japan, a trade route established in 1969 and one
of the first in the emerging worldwide LNG industry. We also export to Mexico,
via pipeline, an amount of natural gas roughly equivalent to what we import
as LNG. Demand for natural gas is growing in Canada for applications like electric
power generation but also, importantly, for the expanding production of crude
oil from oil sands in the province of Alberta. And demand for natural gas has
grown, and will continue to grow, in Mexico, for industrial and domestic use
and electric power generation, a consequence of Mexico’s economic emergence
and growth. Meanwhile, a number of factors at play are making outlooks for natural
gas supply-demand balances in North America uncertain, at best, contributing
to bullish forward prices and underscoring the role LNG may play in the future.

Policy constraints inhibit development of what many believe to be a large,
relatively untapped, resource base in Mexico. The body of law associated with
Mexico’s Constitution restricts exploration and production activities to Petroleos
Mexicanos (Pemex), the sovereign oil company. Pemex has been chronically underfunded
as income from oil sales is routinely diverted to Mexico’s general treasury.
The company has been in search of mechanisms to increase the level of private
investment in its upstream businesses, but a complex array of political and
social factors has kept success out of reach thus far. As a result, Mexico
finds itself becoming an LNG importer as decision makers search for supply alternatives
and portfolio diversification, and a way of making the natural gas marketplace
more price competitive.

The engine for Canada’s natural gas pipeline exports, the Western Sedimentary
Basin, remains an important component of North American supply. Drilling is
moving into new, deeper and thus more expensive terrain. As in the U.S., exploration
companies are delving into Canada’s “unconventional” resources, mainly coal
seam gas. A frontier many hoped would yield new, prolific fields, offshore Atlantic
Canada, has proven to be complex and disappointing. The potential for resource
development offshore British Columbia remains just that, as public dialogue
on environmental and community impacts struggles on.

What of the lower 48? If not for development in our own coal seam gas plays
and the hot Barnett Shale play in Texas, the U.S. would be hard-pressed to show
meaningful gains. The unconventional natural gas resource base in the U.S. encompasses
about one-third of our prospective future domestic supply. Technical advances
must continue to be made for commercially successful operations in coal seam,
tight sands and other plays; what seems to work well in one basin and play does
not easily transfer to others. Progress has been made in Gulf of Mexico deep
water offshore exploration and production and in building the pipeline fairways
needed to link deep water blocks to the onshore pipeline grid. But “oil proneness”
among the deep water blocks really means that companies are not yet in the “gas
window.” To get there means deeper drilling in deep water, something that few
operators seem willing to take on. Meanwhile, older, mature shelf production
continues to decline and offshore moratoria continue to prohibit drilling in
the eastern Gulf of Mexico and off the east and west coasts. Access to public
lands is often touted as a barrier to resource development, but access to private
lands is also a factor. Alaska remains a prime target for growth in both oil
and natural gas resource development, and the outlook for natural gas pipeline
transport from the North Slope looks increasingly positive. Both the Alaska and
Mackenzie Valley pipeline projects will constitute major, multibillion-dollar
investments of a kind not seen for some time.

Unknown and almost unknowable are factors such as changing preferences for
energy as future demand is modified by expectations about real energy prices
and costs. Likewise, we can’t fully anticipate the intensity of fuel competition
if higher natural gas prices persist. Weather is a significant short-term variable,
but longer-term temperature patterns are also important. The myriad viewpoints
on global warming notwithstanding, we are on pace for a typical, roughly 30-year
cycle in heating degree days. We face the risk of an increasing incidence of
periods when colder weather results in sharp calls on natural gas inventories.
Nothing galvanizes public and political attention on natural gas more than a
voting bloc of consumers freezing in the dark. The global warming debate has
generated complacency about complex short-term and long-term weather and climate
patterns and cycles. These trends likely will do more to affect public viewpoints
and debate on energy in the coming years than any of the theories scientists
might put forth.

Enter LNG?

Given the driving forces that prevail around natural gas supply-demand balances,
the prospect of expanding LNG import receiving capacity has gained great attention.
Excluding Everett, the remaining three existing import facilities are in the
process of, or planning for, receiving and storage capacity expansions. About
50 new projects throughout North America, including locations such as the Bahamas,
have been proposed or planned, are moving through the regulatory permitting process,
or are beginning construction. Many of these projects are being pursued
by familiar names – ExxonMobil, ChevronTexaco, Shell, BP, BG, ConocoPhillips
– all have developments under way. New entrants include names such as Freeport
LNG, Cheniere Energy, and Calhoun LNG as well as established energy utilities
such as Sempra, Florida Power & Light and Atlanta Gas Light.

New thinking is permeating the industry. At this writing, Excelerate Energy’s
Energy Bridge, a combined LNG ship/onboard re-gasification technology hatched
out of El Paso Energy and developed by Exmar, was delivering its first 3 billion
cubic feet of cargo about 100 miles from the Louisiana coastline. Excelerate
and competitors, including Tractebel, which operates the Everett terminal in
Boston, are looking to use Energy Bridge or similar approaches off New England.
Most of the international oil companies have offshore LNG facilities on the
drawing boards that would receive, store and re-gasify LNG for shipment via
offshore pipelines to onshore customers. Freeport-McMoRan is hoping to utilize
salt cavern storage of re-gasified LNG that would be received and stored on
an offshore platform. In Mexico, onshore facilities at Altamira, Tamaulipas,
on the Gulf of Mexico (Shell and Total), and in Baja California (Sempra and Shell) are
under construction. Expectations are that Irving Oil and Repsol will develop
a receiving terminal in St. John, New Brunswick, while Anadarko, TransCanada
and others compete in Nova Scotia. LNG from Atlantic Canada and Mexico projects
will principally serve customers in the U.S. or offset scarce North American
domestic production.

The dynamism in this early wave of new LNG development means that any of the
details above could be out of date by year’s end.

During this initial period of announcements, positioning and early development,
certain realities have set in with respect to where new projects might be located,
how they will be operated and managed, and whether they can achieve commercial
success. A large number of “first entrants” (new greenfield projects under development
now as compared to the established receiving terminals) will be along the upper
Gulf of Mexico. A friendlier development climate and strong cluster of downstream
petrochemical buyers have provided an edge to the “third coast.” With the decline
in natural gas production along the shelf, pipeline capacity is available to
carry natural gas from LNG projects to customers further north, or to displace
Gulf Coast onshore production from local use. In some instances, building new
or expanding on existing LNG import receiving and storage capacity will be the
easy part, relatively speaking. More difficult will be the necessary expansions
or additions to natural gas pipeline capacity, including disputes about pipeline
rights-of-way.

All of these realities are grounded in a common theme: whether, where and how
the consuming public and our political representatives accept new LNG projects,
or any new energy infrastructure projects for that matter. LNG developers have
abandoned locations where public opposition caused progress to grind to a halt.
They can take some solace from public resistance to an array of innovative energy
proposals including renewables (stated preferences for wind power apparently
are insufficient to overcome opposition to some projects). Indeed, public resistance
to locating and building new energy infrastructure in the U.S. and many other
countries may be one of the more compelling but least understood and therefore
most poorly articulated issues of our time.

Soft Risk or Hard Risk?

If one engages in a thought experiment on 9/11 and its aftermath, it is possible
to imagine that the conspirators planning this abomination cared less about
the direct effects than the broader and longer-lasting psychological impacts.
It’s possible to imagine months and years of careful study of the U.S. economy
and populace, monitoring demographic and political trends to assess everything
from geographic shifts in where we prefer to live (overwhelmingly along our
coastlines) to how we derive economic wealth (our casual use and enjoyment,
apart from the typical daily aggravations, of our immense infrastructure systems)
to our lifestyles (the “what, me worry?” here-today, gone-tomorrow attitude
that permeates American society as soon as the first-borns of new immigrants
are assimilated into the melting pot). It’s all possible to imagine, but let’s
not give too much credit here. Rather, 9/11 happened at a time in history when
an assortment of other factors was already at work, with the overall result
being that we are trying to build critical new infrastructure in a most inconvenient
context.

LNG has the added hurdle of being less familiar than the corner gasoline station.
And given that even this icon of American life is not well understood, LNG is
quite disadvantaged. The U.S. has more LNG facilities than any other country
– more than 100 small storage and peak shaving facilities where domestic natural
gas production is routinely stored for seasonal use apart from our coastal import
terminals. An irony not lost on thoughtful people in the energy arena is that
when it comes to approvals for new import terminals we seem to prefer
our LNG facilities inland. A convergence of conflicts among multiple uses in
bays, ports and harbors, along with other demographic and political transformations,
coastal urbanization and the post-9/11 psyche, combined into the stew most often
referred to as NIMBY or “not in my backyard,” among many other variants.

But this is way too simple and cavalier a diagnosis. The phenomena at play
are much more complicated, and in teasing out the threads in order to identify
and implement a cure, or at least to get through triage, developers and government
regulators need to consider that it is all about local control.

Our legal and regulatory tradition for more than three decades has been to
enable local public input related to new projects. The public interest is legitimate
and, in the long run, project developers and operators benefit hugely from good
will that comes with respect for the public interest. As canny political operatives
know, however, the public interest is ephemeral and a construct of rent distribution
– regardless of whether the economic rents are monetary or whether they are
less tangible benefits associated with projects. The end result is a regulatory
process, starting with the National Environmental Policy Act and going well
beyond, that has served the public interest by enabling input and justifiable
opportunities for rent-seeking behavior in order to get projects done.

In political settings, control of agenda is always key and locating and obtaining
permission to build new infrastructure is nothing if not an exercise in politics.
In this age of democratization, control of agenda is even more sought after.
Indeed, control of agenda is, now, often the end game. It is no longer adequate
to merely have the right to participate. Instead, activists desire to frame
the decision parameters. Given that opposition moves at the speed of light,
literally, activism against projects may be well established before the project
is even announced. Internet-speed communication coupled with a desire for agenda-setting
control has yielded a blossoming of entrepreneurial initiatives and individuals
who can dominate news media and run circles around companies and government
agencies that tend anyway to be process-bound and focused on public input and
rent distribution.

Put this way, the problem of public acceptance is divorced from anything that
could be construed to be particularly American. The dynamics become broader
in scope, and seemingly disparate instances of local opposition begin to look
like they might have more in common. Even remote locations – for instance in
developing countries where less familiar languages and cultures would appear
to imbue truly distinctive characteristics to the problem – can be reduced to
the question of agenda control.

What to Do?

Business and government are quite rational to be cautious about giving up ground.
Costs for companies to develop projects through elaborate multistakeholder arrangements
in which agenda control is relinquished can skyrocket. Managers may be proud
of such schemes, although they are often less successful than advertised and
they rarely guarantee long-term success. The risk grows that by the time these
arrangements are put into place, the window for commercial success is closed;
alternative sites, where host communities are more accepting of the project,
or less able to oppose it (the true source of environmental injustice), will
inevitably crop up and often win out. At minimum, managing such an endeavor
is time-consuming and commands skill sets that are not widely available. For
all the attraction of “corporate social responsibility” and the implications of
the push on CSR for how energy companies approach and develop new infrastructure
projects, the approach is usually neither practical nor satisfying to anyone
(except perhaps the publisher of the annual CSR report or the many consultancies
and organizations attempting to operate in the field). For their part, government
institutions
are rarely strong enough or deep enough to embody solutions that deviate far
from familiar processes.

The end results of the problem of public acceptance and agenda control are
never discussed, and perhaps this is where things ought to begin. An isolated
case or two in which projects fail and alternatives are not found, or found
only at higher cost, is not worrisome. Consistent failure and substitution
of higher-cost alternatives, in the aggregate, impact all energy customers,
most of all those for whom energy is a greater portion of their disposable income.
This means the worst of all worlds – frequent occasions of environmental injustice
combined with increasing costs of energy for the most sensitive segments of
a society. Across an economy overall, the end result is, inevitably, higher
energy costs and lost competitive advantage.

The Way Forward

Americans are often accused of being addicted to cheap energy. Well, it’s true,
and it has been a powerful force in development of our modern national economy.
In fact, the power of the U.S. economy is precisely that energy costs, in real
terms, have tended to grow at lower rates than our gross domestic product (see
Figure 2).

Even during those periods when real energy costs skyrocketed, the U.S. economy,
in total, weathered the storm in amazing fashion. Our flexible, market-based
economy enables – actually facilitates – adjustment, substitution and technical
innovation. Hard work has been done to introduce market forces to the energy
sector, and many positive gains have been achieved, creating options and opportunities
that we have not had previously.

Yet it is easy to look at history and forget that we have lived a long time
off of infrastructure systems that in many cases are easily a century old. We
will, and must, continue to operate these systems while at the same time expand,
improve and add to our infrastructure base. LNG will be an important component
of our natural gas, and overall energy, supply portfolio. New LNG projects will
be successfully developed – they already are – and provide a bridge to whatever
the future holds. And what does the future hold? Who knows? The point is to
get there.