Information Is Power: The Intelligent Utility Network

In the United States, utility industry
forces are driving substantial changes
in strategy for T&D businesses. The
U.S. Energy Policy Act of 2005 mandates
improved infrastructure and reliability.
Energy costs are rising. Higher energy
prices overall reduce consumer energy
demand and potentially pull down utility
revenue. Smart metering and demand-response
programs are expanding to
satisfy customer demands for conservation
opportunities. Rate cases to cover
increases in non-energy-related costs are
becoming more difficult to push through,
making operational cost reductions more
important.

The lack of investment in aging T&D
assets over the past 30-plus years
has resulted in major reliability and capacity
problems in certain regions. Significant
staff retirements over the next 10 years
will remove critical knowledge and experience
from utilities if not addressed.

The utility industry seeks new ways to overcome these challenges. Utilities are investing in automation, remote monitoring and control systems, metering, analytics, communications and IT infrastructures, and the digitization of many of their processes. They are seeking better connectivity and observability across the electricity supply chain.

To achieve this, common standards are essential. Industry standards promoting connectivity continue to gain well-deserved attention, and many industry groups, such as the ICA, IEEE and IEC, are actively working on data model and communications standards.

Industry bodies such as the IntelliGrid
Consortium are moving ahead with fast
simulation and modeling, distributed
energy resources and consumer portal
projects. The GridWise™ Alliance is progressing
with grid-friendly appliances/
price-sensitive control systems. Other
efforts are emerging, boosted by the
Energy Policy Act of 2005.

All of these factors and more are driving
the era of the digital utility. Many in
the industry see the need for and the value of these investments. The utility
industry is starting to embrace its digital
age and learn from the successes and
failures of other industries that have digitized
and automated before them. After
all, information is power, which has always
been the case.

Recent Developments

Intelligent grid or smart grid strategies
are emerging in the electrical utility
industry. These strategies are used to
align and optimize grid-related investments
across the utility within a common
framework.

[Figure 1: Some Utilities Currently Working on Intelligent Utility Networks]

These strategies are aimed at the development of an intelligent utility information network (IUN), which enables more real-time operational intelligence, connectivity and observability further down into the grid and across the electricity supply chain. This allows the utility to achieve greater reliability and efficiency from its assets and operations and to provide a better quality of service to its customers.
Figure 1 gives an overview of some utilities
and their respective investments in
their modernization. A growing number of utilities are investing in the development of an enterprise strategy. Within
the context of the strategy, investments
in one major functional area can be incrementally
increased to gain further return
for other functional areas in the utility.
For example, utilities investing in AMI are
taking a hard look at their communications
infrastructure strategy. They assess
whether there could be an incremental
investment made in communications that
might benefit other functional needs.

A well-designed and well-built intelligent
utility network can produce a broad range
of strategic and operational benefits for
the utility and its customers, depending
on its focus and the business priorities of
each utility. An IUN will benefit different
utilities in different ways. It is not a one-size-fits-all solution.

Some common characteristics of an
intelligent utility network are listed below:

  • Increased use of automation and digital
    technologies to continue to improve
    reliability, efficiency and service;
  • Functional area process and technology
    investments made as part of a common,
    interlocked utility IUN strategy;
  • Common information architecture, IT
    and communications infrastructures,
    common processes and common
    standards across the utility;
  • Common governance models required
    to manage IUN investments;
  • More real-time grid observability through
    smart sensors, monitors and meters;
  • Tighter linkage between customers,
    assets and grid operations with
    increased customer control, services
    and options; and
  • Increased use of analytics for decision
    support and automation.

[Figure 2: Major Blocks of an Intelligent Utility Network]

An intelligent utility network can be broken
down into five major blocks, as shown
in Figure 2. These blocks are: grid equipment
and sensors; communications infrastructure;
IT infrastructure; information
systems; and analytics.

An IUN is the network through which
the monitoring, analysis, control and management
of many of the functions of a
utility will occur. It is the network through
which the flood of field monitoring data
streams are channeled, stored, combined,
analyzed and transformed into actionable
information streams and then channeled
to the appropriate person or application in
order to support timely decision making.
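The flow just described can be pictured as a small pipeline. The sketch below is a minimal, hypothetical Python illustration (the thresholds, field names and routing table are invented, not drawn from any utility system) of how a raw field reading might be analyzed and channeled to the appropriate role:

    from dataclasses import dataclass

    @dataclass
    class Reading:
        device_id: str   # e.g., a feeder sensor or smart meter
        metric: str      # "voltage" (per unit) or "temperature" (deg C)
        value: float

    # Hypothetical routing table: which role acts on which alert type.
    ROUTES = {
        "transformer_overheat": "asset manager",
        "voltage_sag": "grid operator",
    }

    def analyze(r: Reading):
        """Turn a raw reading into an actionable alert type, if any."""
        if r.metric == "temperature" and r.value > 95.0:
            return "transformer_overheat"
        if r.metric == "voltage" and r.value < 0.95:
            return "voltage_sag"
        return None  # normal reading: archive it, alert no one

    def channel(r: Reading):
        alert = analyze(r)
        if alert is not None:
            print(f"{alert} on {r.device_id} -> notify {ROUTES[alert]}")

    channel(Reading("XFMR-17", "temperature", 101.3))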

An intelligent utility network makes it possible to supply the right information, to the right person, at the right place, at the right time – in a more standard, common, cost-effective and organized fashion – and can provide a higher level of observability over the entire electricity delivery supply chain.

Return on the Intelligent Utility Network Investment

The most important element to manage in
a utility is information. It is this information
that is used to manage the grid, the
assets and service to the customer. All this
has to be done in a manner that meets the
requirements of the utility leadership, the
regulators and the shareholders. It is used
to achieve improvements in operations,
efficiency, reliability and service.

It is critical to be able to access, analyze and control information within the time frames required to manage and operate a real-time electrical grid. This level of challenge and coordination
can only be achieved through a common
intelligent utility network that connects
to all parts of the utility.

If each utility function were allowed
to automate and digitize independently
and make investments in technologies
and process improvements as it saw fit, a
utility would end up with its next generation
of functional silos with some vertical
benefits achieved but little benefit across
the utility as a whole. A major benefit of
an IUN is the ability to reuse and leverage
these investments across all the functional
areas, as they will be based on common
rules, governance, standards and
infrastructures.

[Figure 3: IUN Outage Restoration Example]

There are many examples of IUN benefits and of the value of information shared across a utility. Figure 3 depicts an outage scenario with real-time asset monitoring, showing how the initial data from the field is used by the various utility functions, the actions taken and the benefits gained.

The IUN allows the utility to relate
real-time asset health to grid operations
through the development of equipment
condition monitoring. Real-time knowledge
of asset health allows the utility to
sweat the assets while controlling operating
risks. Strategies can be employed to
increase asset life through better management
and maintenance. With better information
regarding risk and return, capital
and O&M spending can be optimized.

From a workforce management perspective,
one obvious benefit is reduction
in frequency and duration of site visits
through remote monitoring and configuration.
When crews must be dispatched,
as in the case of an outage, sensing data
helps to pinpoint the location and cause,
allowing crews to be better prepared and
informed. With a common infrastructure
for utility applications and communications,
functionalities are common, and data is entered once and reused many times.

Access to accurate historical operations
and asset data improves grid planning.
Capital expenditures can be better optimized
across the grid, allowing the utility,
in many cases, to defer or minimize capital
investments. More accurate design and
sizing decisions for equipment to meet
demand growth can be made.

The operator has fewer system blind
spots, thanks to intelligent devices, sensors
and meters. Faster detection, determination
of cause and localization of
outages is possible utilizing sensing data
and analytics. Load can be better balanced
and stability maintained. Power quality,
reliability and fault issues can be located
before they impact customers.
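To make outage localization concrete, consider a single radial feeder fitted with fault-current indicators at each segment boundary. Under that simplifying assumption (the topology and device names below are invented for illustration), the faulted segment lies just downstream of the last indicator that registered fault current:

    # Indicators listed from the substation outward; True means the
    # device registered fault current passing through it.
    def locate_fault(indicators):
        """Return the segment just downstream of the last tripped indicator."""
        last_tripped = None
        for name, saw_fault in indicators:
            if saw_fault:
                last_tripped = name
            else:
                break  # fault current never reached this point
        if last_tripped is None:
            return "no fault current seen beyond the substation"
        return f"fault in the segment just past {last_tripped}"

    feeder = [("FCI-1", True), ("FCI-2", True), ("FCI-3", False)]
    print(locate_fault(feeder))  # fault in the segment just past FCI-2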

The intelligent meter is a portal to the
consumer. It provides a profile of customer
usage. Connect-and-disconnect as
well as load control can be accomplished
remotely. Time-based rates are enabled.
The operator has another intelligent sensor
on the grid. For the customer, this
means more choices about price and
service, less intrusion and more information
with which to manage consumption,
cost and other decisions. This is certain to
make regulators happy.

Planning and Development

A strategic focus should be applied. A comprehensive approach to development, support and validation can yield a blueprint for building out the IUN.

Stage 1: Launch
The strategy is the end state – not the next
step. Pursuing incremental steps without
the benefit of the bigger picture can lead
to fragmented, suboptimal solutions. Conversely,
utilities often are overwhelmed
by the enormity of the transformation and
abandon it. Implementation can be incremental
and spread over time, as long as
each step is a part of the larger strategy.

The key to strategy development is
to focus on how the intelligent utility
network can enable your T&D strategy;
then determine which capabilities will be
required to achieve the strategy. Considering
these required capabilities, what are
your capability gaps? Finally, what are the
enablers for addressing these gaps and
establishing the required capabilities?
With these insights, the utility can establish
strategic goals, along with process or
investment strategies.

Road map development is an iterative process with four steps:

  • Road map development starts with
    consideration of the “as is” state of the
    utility, with respect to the five blocks
    of the IUN as previously discussed, as
    well as organizational structure, utility
    constraints, data flow models, current/
    planned projects, current standards and
    governance models in effect;
  • Development of the “to be” state and analysis
    of the gap from the “as is” state determine
    the high-level applications, timeline,
    architecture and design specifications,
    based on technical requirements,
    resource availability, constraints and
    desired benefit timing;
  • Costs and benefits are then estimated
    based on equipment/labor costs and
    the timing of benefits realization; and
  • Finally, costs and benefits drive the
    business cases for the implementation
    options to build out.

The portfolio of business cases needed to
support the realization of a strategy should
meet the needs of four key stakeholders,
and can take several months to complete.
Senior management and Wall Street will
focus on ROI and financial risks. Internal
utility functions need to see how the IUN
will provide them with benefits so each can be convinced to provide an appropriate share of the funding. Customers should
understand how it might provide them
with improved service, increased reliability
and new products and services options.
Regulators will focus on increased reliability,
capabilities for time-of-use pricing
and other new pricing options, and higher
customer satisfaction.

Stage 2: Pilot and Validate
Pilot projects are used to validate and
mitigate technological, system and project
risks associated with the development of
an intelligent utility network. Pilot projects
are also used to better validate costs. The
pilot projects can range from a single proof
point to the implementation of part of an
IUN to a limited small-scale deployment,
for example, in a neighborhood. The types
of piloting required depend on the flavor
of the intelligent utility network being
planned by the utility. It could be asset-centric,
customer-centric, operations-centric
or all three. These types of projects are
also a very good means for demonstrating
benefits to employees, management, customers
and regulators.

The utility should establish a formal
benefits realization framework and governance
structure in the pilot-and-validate
stage and keep it in place throughout
execution, to provide the governance, processes
and reporting needed to drive the
business case benefits.

Stage 3: Align
It is imperative to selectively transform
your processes and organization to align
with and take the maximum advantage
of the availability of an intelligent utility
information network while it is being
built out across the organization. Do not
underestimate the planning and efforts
required to manage such change in the
organization. Change management is a
very significant part of developing a strategy.
When employees are made part of the design of the intelligent utility network and embrace its information riches, it will be a success.

[Figure 4: Stages of an IUN]

Stage 4: Execute
Execution builds on the pilots with a series
of projects that are carefully planned,
sequenced and coordinated based on the
road map. Figure 4 illustrates the complete
process from launch through execution.
The big-bang approach will not work;
this is evolution, not revolution. Careful
road map development and project management
are essential. Pilots will resolve
uncertainties and doubts. Benefits realization
will ensure that business case commitments
are attained. Change management
will assist in driving the necessary
transformation.

Conclusion

The intelligent utility network is now
becoming a reality. More and more
utilities are developing and implementing
modernization strategies. Government
and regulatory entities are embracing
the IUN as a means to mitigate growing
energy costs. Soon the intelligent utility network will be the standard model that all operators must meet.

The change will be transformational
and essential. To address the imminent
challenges of rising energy costs, aging
infrastructure and increased demand for
reliability, the electrical utility industry, like
other industries before, will adopt automation
and digitization in order to continue to
improve reliability, efficiency and service
to its customers.

The Liability of Cost Cutting: A CEO’s New Challenge

When financial results don’t meet
expectations, senior executives
face five dreaded words that
have become much too common: “We
need to cut costs.” Those expectations
might have been set according to shareholders’
requirements, past performance
or metrics that seemed attainable at the
time. Explanations aren’t always crystal
clear. Regardless, the mandate goes
out for every department to “do their
share,” whether by cutting travel, training,
maintenance, engineering, marketing or
people. Those who hit their targets are
heroes, while the naysayers are labeled
as whiners.

Budgets are set; reality sinks in later.
Lower-level managers wonder how
they can run the business with reduced
resources, but they have no choice.
They’re forced to cut costs in order
to meet their tightened budgets. The
accountants are happy. But wait – the
year’s end brings an unpleasant surprise:
The top-line growth, revenue or profit
is not there. The senior executives are
scrambling for answers; they did not
even hit last year’s revenue. The pipeline
is weak and market share is falling. The
knee-jerk response is to enact more cuts, even deeper this time.

It’s said that the definition of insanity
is doing the same thing repeatedly and
expecting a different result. Unfortunately,
not only does this scenario play
itself out in every major industry, but it
is considered to be “prudent and responsible”
managerial conduct.

While cost cutting can be disappointing
for the company’s bottom line, there
may be even more dire consequences. As
illustrated by the October 2006 Chemical
Safety Board’s findings related to
British Petroleum’s (BP) 2005 Texas City
Refinery Explosion, serious operational
disasters can result from cost-cutting.
The explosion killed 15 people and injured
180. BP has set aside $1.6 billion in claims
to families alone. Despite the company’s
protestations that it did not understand
the basis for some of the safety board’s
claims, an 11-member panel led by former
Secretary of State James Baker concluded
that, according to media coverage, “Cost-cutting at BP was
brutal. At the Texas City refinery, total
maintenance spending fell 41 percent from
1992 to 1999, while total capital spending
fell 84 percent from 1992 to 2000. On top
of those cuts, BP challenged its managers
to reduce costs an additional 25 percent
after the company’s merger with Amoco
in 1998.”

In the wake of the Baker report, BP’s
CEO John Browne announced that he
would retire 18 months early. He had been
the leading proponent of the company’s
entrepreneurial culture and decentralized
philosophy. BP also issued a statement
that laid the lion’s share of the blame for
the explosion on lower-level workers and
supervisors. However, the Baker report
found that “BP’s refineries are understaffed
and that employees did not report
accidents and safety concerns because
they feared repercussions or thought the
company would not do anything about
them. Audits were focused on making sure
that refineries were legally compliant,
rather than ensuring that the management
systems were making the refineries safe.”

Why did BP make management decisions
that put the company at such risk?
Did it think it was doing the right thing
by acquiring old refineries, consolidating
and gaining “synergies”? What numbers
could have been generated to promote the
belief that cost cutting was prudent, necessary,
even possible?

Certainly the BP example is not an isolated
incident. From Ford to Pfizer, cost
cutting is back in vogue on a grand scale.
But these measures sometimes compromise
the health of the enterprise and
its long-term viability. In the future, will
executives and board members be held
criminally liable for these types of decisions,
as they have been with other financial
improprieties? Are there other ways
to satisfy shareholders that don’t involve
risky cost cutting?

The answer is yes, but it requires a new paradigm for the modern enterprise. This article will explore
the weaknesses in current approaches
to financial accounting and offer a new
strategy for making better decisions.

The Limitations of Financial Accounting for Decision Making

Let’s assume that the capabilities of people,
equipment maintenance, technology,
supply chain and distribution channels
are integral to the health and viability of
the enterprise. Could these costs we are
cutting actually be investments with a
value and, therefore, a return (ROI)? Not
according to financial accounting, they
aren’t. These expenses are generally
considered “costs.” Although the Financial
Accounting Standards Board and
the International Accounting Standards
Board have several projects directed at
formulating a new conceptual framework
for financial accounting and for handling
“intangible assets” such as those listed
above, the reality is that rules change
very slowly. Modern financial accounting
was born in a time when financial and
physical capital were sources of competitive
advantage. Now estimates show that
80 percent of the U.S. economy’s value is
derived from intangible assets and largely
unaccounted for by financial accounting,
according to a February 2006 BusinessWeek Online article.

Financial accounting practices are
based on theory. The mark of a good
theory is that it can describe what is happening,
explain why it is happening and
predict what will happen next. Because
financial accounting takes such a limited
view of a company’s assets, its measures
support the idea that a reduction in
expenses will lead to improvements in
profitability. If, however, these “expenses”
are really investments in intangible
assets, then these reductions are, in fact,
“de-capitalizing” the business. The result
is less ability to create value – less output,
less revenue.

Most operating managers do recognize
that cost cutting and downsizing can sometimes sacrifice critical capabilities.
They know that the equipment is not
always being maintained to perform at
its optimal level. They understand the
risk. So why then don’t the numbers back
them up? Are these managers just soft or
can they see something financial accounting
can’t?

The Managerial Challenge – A New Paradigm

Managers need to make decisions based
on theories and numbers that illuminate,
not obfuscate, what is actually happening
in the business. Boards and senior executives
need to provide sound explanations
for those decisions to their employees,
their shareholders and their customers.
This requires a new paradigm – a strategic
framework and analytic techniques
that describe and predict the returns they
will actually achieve with the investments
they make every day (such as human
capital, physical plant equipment, facilities,
and technology such as processes
and IT).

The remainder of this article presents
core concepts of a framework developed
by ProOrbis. It’s usually discussed in the
context of improving productivity (the
asset concept of ROI); however, as risk
and value are flip sides of the same
coin, the framework will be explored
here from a decision-making standpoint
that addresses both risk and value
improvement.

A New Paradigm

[Figure 1: Organizational capability is comprised of three core assets]

For an enterprise to understand its tangible and intangible assets, there are three key requirements:

  • Classification – definition of all investments
    and sources of value in an enterprise;
  • Integration – connection of all elements
    into one integrated model; and
  • Link to Value – a model that produces
    value and can be measured.

By putting assets in this context, the investments and the value they create
are causally linked, giving the model its predictive power. The framework has
some specific definitions of terms and analytic concepts to meet these three
requirements.

First, organizational capability is everything the organization “can
do” with its assets. There are three core assets – physical capital
(PC), including all tangible assets; technology capital (TC), including product
technology, R&D, information technology and process technology; and human
capital (HC), including employees and contract staff (see Figure 1). In this
model, an asset – tangible or intangible – is any productive means
the organization materially controls that can be used to create value. There
are some intangible assets, such as brands, channels, customer relationships,
intellectual capital, knowledge management, networks and the like. These assets
were created by the three core assets in the past, but can be used over and
over again.

Core assets are combined into production functions (see Figure 2) that are
designed to take inputs, such as raw materials, and generate valuable outputs,
called throughputs, which are the company’s offerings (products and services).

[Figure 2: Capability transforms inputs into throughputs]

Core
assets are not purchased to be resold directly. They are purchased to be used
by the organization to develop its products and services. As a result, they
derive their value from the value of the products and services they are used
to create. Therefore:

Throughput – Input = Value of the Core Assets

This form of valuation assumes the company is a going concern, as opposed to the value placed on assets when a firm is going to be shut down.
Liquidated assets are only worth what
you can sell them for, not what you can
make with them. Going-concern valuation
involves the combination of assets and
the way they are intended to be used to
create value.

Since the assets in combination create
the value (T-I), then the return on the
asset investments of the enterprise (ROI)
can be calculated:

Return on Investment = (Throughput – Input) / Investment in Human, Physical and Technology Capital
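A quick worked example helps fix the idea; all figures below are invented for illustration:

    # Hypothetical figures, in $ millions.
    throughput = 500.0   # value of the products/services produced
    inputs = 300.0       # raw materials and other inputs consumed
    investment = 250.0   # combined human, physical and technology capital

    value_created = throughput - inputs   # T - I = 200.0
    roi = value_created / investment      # 200 / 250 = 0.8
    print(f"value of core assets: {value_created}, ROI: {roi:.0%}")  # ROI: 80%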

[Figure 3: Every asset requires life cycle asset management to ensure that assets perform as required for the capability]

All assets in an organization are used
either to create throughput or to manage
assets (asset management systems), as
shown in Figure 3. For example, you need
the HR department in order to manage
your human capital. The right number of
people with the right skills for your business
do not simply appear; they have to
be acquired and managed. Therefore
there is no “overhead” or “fat to cut.”
These assets and the asset management
systems that create them also have an
investment and, therefore, an ROI.
Risk, as discussed in this article, is like negative throughput.

Risk is defined as the potential
for assets to perform in a way that
destroys value. For BP, lost production,
fines and claims to families were, in
essence, subtracted from the value
created by the refinery. One wonders how
much BP could have invested to avoid $2
billion in destroyed value. What would the
ROI have been on those investments –
10-to-1, 1000-to-1 or 10,000-to-1? Viewing
those cost reductions as de-capitalizations
is what will shift the conversation
from “how much money did we save?”
to “how much value did we destroy?”

Making Better Decisions

Managers can use this paradigm to
improve decision making in several ways.

  • Putting Investment in the Context of
    Value and Risk. This entails reframing
    asset “costs” as investments needed
    to create value and then determining
    the effect on the value (T-I) when those
    investments are reduced. The effect
    will include both value impacts and any
    resulting risk. The new goal will be to
    improve value to investment ratios, not
    just lower costs.
  • Taking a Comprehensive Approach.
    Managers must consider a decision’s
    impact on the entire enterprise, including
    the assets, inputs and throughputs
    (value and risk), to predict a real outcome.
    When faced with any single asset
    reduction, they should consider the
    impact on the other two assets in the
    production function. The intended and
    unintended results will be more obvious.
  • Making Decisions in the Right Order.
    By beginning with value, then formulating
    the needed capabilities and identifying
    the requirements for assets and
    asset management, decisions are fed
    with the proper data. Making decisions
    “out of order” turns an opportunity into
    a constraint. For example, if a manager
    cuts training “costs” (a human capital
    asset management investment), there
    may be a decline in production (capability)
    due to less competent human
    capital (asset). Instead the manager
    should start with the design of the work
    process that delivers the required production
    (capability) and determine the
    requirements for human capital performance
    (asset). From there the manager
    can decide upon the right investment in
    training (human capital asset management
    investment).
  • Making Decisions Transparent and
    Actionable. By articulating the comprehensive
    nature and the causal relationship
    between decision-making factors,
    managers make their rationale more
    easily accessible to broader audiences,
    including shareholders, employees,
    customers and the community. This
    builds credibility for the decision and
    gives managers a command of the facts
    required to execute the change. Transparency
    in decisions and their relationship
    to outcomes creates a clear path
    that people can understand, engage in
    constructively and reasonably support.
  • The Hidden Advantage – Productivity
    for Growth. This article has discussed
    the serious liabilities of mislabeling
    critical investments in organizational
    assets as “costs.” Return on investment
    (ROI) is another word for productivity.
    By optimizing the relationship between
    value and investment, significant
    resources are freed up, as companies
    discover new ways to remix assets to
    create higher-return production functions.
    For industries expecting significant
    growth such as nuclear utilities,
    scarce resources can be redeployed to
    new operations and create opportunities
    for real growth.

The remarkable benefit of following this
paradigm is the way it allows an organization
to solve its intractable problems. Is
it a silver bullet? No, but it’s certainly
clear that the current paradigm is not
working. With everything to gain and not
much to lose (remember our definition of
insanity), this shift in perspective might
be worth exploring.

References
1. DiFrancesco, Jeanne, “Managing Human Capital as a Real Business Asset,” IHRIM Journal, March 2002; “Human Productivity: The New American Frontier,” National Productivity Review, Summer 2000.
2. Mandel, Michael, Steve Hamm, and Christopher J. Farrell, “Why the Economy Is a Lot Stronger Than You Think,” BusinessWeek Online, February 13, 2006.
3. Cummins, Chip, “U.S. Cites Cost Cuts’ Role in BP Refinery Blast,” WSJ.com, October 31, 2006.
4. “Cost-Cutting Led to BP Refinery Fire, Report Concludes,” PBS Online, November 1, 2006.
5. Timmons, Heather, “Poor Management and Cost-Cutting Hurt Safety at BP, Report Finds,” International Herald Tribune, January 16, 2007.
6. Cummins, Chip, and Carrick Mollenkamp, “BP’s Next Slogan: ‘Beyond Probes’,” WSJ.com, February 7, 2007.

Audit Committees Step Up Risk Oversight

When the audit committee of a
publicly traded company sits
down to meet, what’s on the
agenda? What issues are considered
most important?

Across many industry sectors, audit committees are placing increasing priority on risk issues, with regulation, mergers and acquisitions (M&A) and information technology leading the
and information technology leading the
list of top concerns. Those are the findings
of an in-depth survey of audit committee
chairs and members from 176 companies
across 11 industry sectors, conducted by
Ernst & Young. Among the participants
were 18 utilities and nine oil & gas firms.

The 2006 Audit Committee Survey
involved audit committee chairs, audit
committee members and board members,
gathering information and gleaning
emerging best practices regarding their
oversight role, activities and the composition
of their committees.

The overall results indicate that audit
committees are evolving with the changing
business landscape, such as by
developing an increased awareness and
understanding of the risks facing businesses
today. Yet the survey results also
reflect a growing concern over escalating
responsibilities, the increasing need for
continuing education, and the shrinking
pool of available and qualified members
to serve in these critical corporate governance
roles.

One of the most interesting results of
Ernst & Young’s 2006 Audit Committee
Survey is the growing importance boards
are placing on proactively identifying and
managing risks related to business strategies
and objectives. That is especially true
for companies in industries such as banking,
insurance and telecommunications.
This paper will provide an overview of the
study’s findings and their implications for
the oil & gas and utility industry.

Oil & Gas and Utilities Audit Committees: Making the Risk Transition

Audit committee members for oil & gas
and utilities companies describe their current
primary responsibilities as providing
oversight for financial reporting, internal
controls and emerging risks. The focus
on emerging risks is relatively new and is
forcing these audit committees to stretch
their understanding of the business
beyond the core competencies of accounting,
capital structure, tax and credit risks.

Much like those in other industries, oil & gas and utilities respondents reported a clear understanding of
their companies’ risk profile from a strategic,
operational, financial and compliance
perspective. They also reported having
comprehensive processes and structures
in place to manage risks, but they appear
to be moving more slowly than companies
in other industry sectors when it comes
to fully transitioning audit committee
responsibilities to include oversight of
emerging risks.

For example:

  • Only 31 percent of oil & gas and utilities
    companies spend 20 percent or more
    of their meeting agendas on risk, compared
    with 46 percent of those from
    other sectors;
  • Only 37 percent of oil & gas and utilities
    companies receive quarterly risk
    updates, compared with 55 percent of
    those from other sectors; and
  • Only 36 percent of oil & gas and utilities
    companies use outside risk advisors,
    compared with 51 percent of those from
    other sectors.

Among the best practices identified by
participants in other sectors were the
establishment of a separate risk committee;
periodic meetings with regulators
to understand the breadth of emerging
issues; and monthly or quarterly reports
on risk provided by management.

Oil & gas and utilities audit committees
are less likely than other industry
sectors to have a separate subcommittee
on risk, and fewer than 10 percent have a
formal crisis plan for dealing with financial
errors, irregularities or compliance
issues. Yet they do see a need for greater
involvement in helping their companies
manage the risk landscape.

Audit Committee Opportunities

Of the companies surveyed, more than
half of their audit committee members
were between the ages of 61 and 70. This
tells us that in coming years there will
probably be substantial turnover among
audit committee members across many
sectors, representing both a challenge
and an opportunity.

The challenge is for organizations to
find and attract talented, experienced and
diverse directors who fully understand
the complexity and rapid pace of change
that is common in business today.

The opportunity for audit committees is
to expand the knowledge and skill of their
boards with men and women who have
strong backgrounds in managing risk and
deep experience in finance, accounting,
legal and regulatory affairs, with a more
complete understanding of topics such as
information technology.

In fact, corporate boards across all
industry sectors may be well-served to
re-evaluate their recruiting practices
and seek out audit committee members
under the age of 50 who can bring a more
diverse range of thought and experience
to their roles. The growing awareness of
the need for more hands-on risk management
guidance also means that audit
committees must include members who
take a more holistic view of today’s business
environment.

Narrowing the Focus: The Utility Sector

When comparing responses from the utility
sector alone with the 10 other industry
groups in the survey, we found that utility
audit committees often have a more
narrowly defined scope for risk oversight
– with a primary focus on accounting
and capital structure risks – than do
other industry sectors, which also focus
on tax and regulatory risk. As a general
rule, most utilities also do not face the
risks associated with global expansion
that many other sectors face. This might
explain, in part, why utility audit committees
spend less time than other sectors on
risk issues.

However, despite the narrower focus on risk to date, survey results
indicate that this attitude is changing.
Respondents cited the utility sector’s top
risk concerns as:

  • Regulatory issues;
  • Major initiatives;
  • People/human resources;
  • M&A; and
  • Market dynamics.

It is worth noting that utility company
respondents mentioned regulatory issues as the top risk concern, likely because the industry is facing
an expanding compliance and regulatory
risk universe as a result of Sarbanes-
Oxley 404, the FERC’s additional enforcement
powers under the Energy Policy
Act, and new Clean Air Act rules, which
are layered on top of ongoing requirements
related to rate making, reliability,
market practices and operational performance.

The utility sector’s focus on major
initiatives and people as the second and
third priority concerns is likely the result
of continued M&A and consolidation activity
on both the wholesale and retail sides
of the industry, as well as issues related to
an aging workforce and the challenge of
replacing retiring workers.

Shift in Risk Focus

Many utilities, their boards of directors
and their audit committees are only now
adjusting to the growing regulatory compliance
requirements – and recognizing
the time, resources and business implications
involved. Yet our survey shows
there are still others that have not yet
come to terms with the shifting tide of
risk oversight.

The companies that are behind the
curve are likely to be more vulnerable
to “risk surprises,” which are exactly
what good risk oversight practices are
intended to prevent. In all fairness, during
the past two years, utility respondents
indicated that they were not caught by
any unexpected risk developments, so the
processes in place to identify and manage
risk may be working well. Let’s hope so.
But with changes in the regulatory risk
landscape, it is not wise to rest on laurels.
All indications are that utility audit committees
will place more focus on regulatory
compliance risk in the years to come.

To support this expanded focus, utility
company audit committees will need to
request more information than they currently
get from management, and more
support from outside sources may be necessary
to identify potential shifts in state
and federal regulatory environments.

To their credit, more than 80 percent
of utility audit committees have a formal
orientation program and nearly half have
a formal continuing education program
that most respondents regard as effective.
They did acknowledge, however,
that greater education is needed for
audit committees in the areas of accounting
and financial reporting risk on one
hand and regulatory compliance risks
on the other. This interest in education
around regulatory risk is consistent
with the previously mentioned trends
emerging around the focus on regulatory
risk.

Industry Insights: The Oil & Gas Sector

Perhaps given the market fundamentals
facing oil & gas companies today – volatile
commodity prices, shifting geopolitical
environments, calls for new energy-related
mandates and a shrinking pool of qualified
employees – it is not surprising that oil &
gas audit committees identified people/
human resource issues and market dynamics
as two of the top five most important
risk factors their organizations face.

Top risk concerns among oil & gas
industry respondents were:

  • Legal issues;
  • Regulatory issues;
  • Hazards (environmental and safety);
  • Reputation; and
  • People/human resource issues.

The survey also found that, in general, oil
& gas company audit committees devote
more of their regular meeting agenda
to risk-related issues than utilities do.
To their credit, a majority of that time is
invested in actively discussing risk issues
and sharing viewpoints rather than listening
to presentations, which tend to dominate
many audit committee meetings in
other sectors.

Virtually all of the nine oil & gas participants
reported that they had a clear
understanding of their company’s risk
profile from a strategic, operational,
financial and compliance perspective.
And 75 percent indicated that they had
a comprehensive process and structure
in place to identify and manage risks
related to business goals and objectives.
Yet in contrast to the utility sector, 25
percent also acknowledged that their
audit committees had experienced significant
surprises in the past 24 months
with respect to new risk developments in
areas that were not addressed by existing
processes.

One area deserving attention for oil &
gas audit committees is that of continuing
education. While two-thirds have a formal
orientation program, only 11 percent have
a formal continuing education program.
Similar to utility respondents, oil & gas
audit committee members specifically
pointed to accounting and reporting financial
risk and regulatory compliance risk as
the top two areas where greater education
is needed.

Other Key Oil & Gas Findings

With regard to industry risk, most survey
respondents said they were planning for a
decrease in oil & gas prices within the next
12 months.

Nearly 60 percent said they had not
revised their company’s process for estimating
oil & gas reserves, and 71 percent
said there was no need to revise existing
SEC rules governing reserves. And while
the primary motivation cited for seeking
M&A partners was to add to their
reserve base, there was no similar focus
on building relationships with national oil
companies to improve access or develop
partnerships.

Future Challenges

New regulatory requirements and legislation,
particularly the highly demanding
Sarbanes-Oxley Act, and heightened
expectations among stockholders and
other investors are just some of the
myriad of issues weighing on the minds of
top-level management guiding today’s oil
& gas and utilities companies.

To meet these demands and stay competitive,
oil & gas and utilities companies
are working to bring more structure and
transparency to their operations and
to develop more effective methods for
identifying and controlling potential risks,
as well as preventing risk surprises. By
increasing time and activities focused on
risk, diversifying the talent on boards to
capture new areas of expertise and stepping
up continuing education, audit committees
will be better positioned to identify,
evaluate and manage risk exposure
for companies.

Harnessing the Power of the Intelligent Grid to Innovate / Enhance Efficiency and Reliability of Utility

The Challenge

The electric power infrastructure is
a foundation of American prosperity
and one of the key elements of
the digital economy of the future. This
vital asset is under pressure – issues
such as continuing growth in demand,
the importance of power quality and reliability
in a digital society, aging workforce
and assets, physical and cyber security
of the electric infrastructure and environmental
and cost pressures all combine to
drive the need for change. This change
can come in the form of implementing
an intelligent grid for the electric utility,
providing communications and computer
control to create a highly automated,
responsive and resilient power delivery
system.

Harnessing the Power of the Intelligent Grid

To address these challenges, IBM and CenterPoint Energy Houston Electric, LLC (CNP) – a Houston-based domestic electric energy delivery company that includes transmission and distribution utility operations – are moving toward modernizing CNP’s electric grid by transforming business processes, improving reliability and utilizing advanced technology.
Much of the technology – hardware,
software, new materials – has already
been proven, by utility pioneers or by
other industries.

The intelligent grid utilizes technology
in three important ways: 1) automating
the grid to harden it and make it less
costly; 2) integrating the electric grid to
create an end-to-end network for quickly
acquiring and transporting data from
millions of end points; and 3) expanding
the value of the grid beyond typical utility
needs to support new services and new
markets offered by retailers.

IBM and CNP have developed an intelligent
grid road map that aligns with the
DOE’s “Grid 2030” and EPRI’s IntelliGrid
Framework. The intelligent grid architecture has three key components:

Event Avoidance

  • Remote load profiling/management
  • Grid event diagnostics
  • Advanced data analysis
  • Grid condition sensing and predictive
    response

Self-Healing Grid

  • Improved asset management/visibility
  • Real-time grid condition monitoring
  • Automated grid switching
  • Meter as a sensor
  • Transformer load management
  • Condition-based crew dispatching
  • Grid event detection and location

Advanced Meter Infrastructure

  • Meters
  • Meter interrogation
  • Meter connect/disconnect
  • Outage notification
  • Two-way communications with meters

The components of the intelligent grid
are the important building blocks of the
smart delivery systems. They provide preventive care for the network by identifying and repairing intermittent grid problems to minimize outages. The
system is built with real-time sensing,
thus providing the ability to react to
disturbances and helping to maintain
a healthy and secure power grid. One
area not to overlook is the need for continuous
monitoring to be able to dynamically optimize the performance and
robustness of the power grid.

An intelligent grid becomes a sensing
network that connects all parts of the
electric power distribution infrastructure,
enabling automatic data collection, storage
and analytics to support management
of assets and operations with improved
observability, ultimately delivering efficient
system reliability. This allows sensor
devices such as meter relays to communicate
over the network via middleware
services that can connect and communicate
with both legacy and modern back-office systems as well as field operations
devices that monitor and control power
line equipment.

The back-office functions can include:
finance and administration, customer
management, human resources and procurement.
The field operations devices
can help a utility to manage asset life
cycle, advanced metering and mobile
workforces. This cycle can look at the
overall analytics and update the systems
to continuously provide the appropriate
feedback and data to support the required
back-end functions. This is all done over a
flexible and open architecture – one that is
safe and secure.

Advanced Metering Is Catalyst for the Intelligent Grid

Section 1252 of the Energy Policy Act of 2005 requires regulatory commissions to
consider new standards relating to electric
rates and service, and encourages
“time-based pricing and other forms of
demand response, whereby electricity
customers are provided with electricity
price signals and the ability to benefit by
responding to them.” Advanced meters
and demand response use enabling
technologies, such as sensors, that act
as catalysts for the development of the
intelligent grid. The IBM/CNP Intelligent
Grid Solution involves installing, testing
and monitoring automated meter reading
(AMR) of electric meters, remote connection
and disconnection of electric service
and automated outage detection and restoration.
Broadband over powerline (BPL)
technology will be used for the data communications
network.

Communication

The CNP intelligent grid and advanced
metering strategy requires a communications
capability that enables extensive
real-time grid observability. This includes
monitoring, data transport and integration,
along with the analytics necessary
to provide input to automated processes
to support advanced decision making in
the areas of operations, customer services
and asset management. BPL was
chosen as the communication network
because it provides a robust, secure
communications infrastructure overlaying
the grid and is capable of managing
high-speed data flow for critical utility
applications. CNP has chosen an open
architecture and is working with IBM as
the system integrator to implement this
solution. This communication backhaul
network can be segregated into four
distinct segments or tiers:

  • Tier 1 – Major backhaul: data center to
    the substations
  • Tier 2 – Minor backhaul: substations to
    the intelligent grid device or meter relay
  • Tier 3 – Wireless meter data collector
    communications with the meter
  • Tier 4 – Meter to ZigBee wireless connection
    to home energy management devices

This communication network can link
all the components of the intelligent
grid to provide a path for the data to be
transmitted back to the data centers for
processing.
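As a rough software picture of this tiered design, the path a reading travels can be modeled as an ordered list of hops with a per-hop delay budget. The hop names follow the tiers above; the millisecond figures are invented, not CNP's:

    # Hypothetical per-tier latency budget (milliseconds), edge to center.
    BACKHAUL = [
        ("tier 4", "home device -> meter (ZigBee)", 50),
        ("tier 3", "meter -> wireless data collector", 100),
        ("tier 2", "collector -> substation (BPL)", 150),
        ("tier 1", "substation -> data center", 200),
    ]

    def end_to_end_budget_ms():
        """Bound on edge-to-data-center delay: the per-hop budgets summed."""
        return sum(ms for _, _, ms in BACKHAUL)

    for tier, segment, ms in BACKHAUL:
        print(f"{tier}: {segment} ({ms} ms)")
    print(f"worst-case end-to-end budget: {end_to_end_budget_ms()} ms")  # 500 ms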

Advanced Metering Infrastructure

Advanced metering provides the cornerstone
for the smart grid, enabling a more
fluid and competitive retail market while
enhancing a utility’s ability to improve
reliability, customer service, operational
efficiency and energy conservation.
Advanced metering and an intelligent grid
will also expand electric competition in
the Texas market by allowing retail energy
providers (REPs) to offer more services
without large investments in technology.
This will be accomplished by more
transparency of pricing and insight into
choices available in the market. Advanced
metering, acting as a cornerstone for the
smart grid, also provides a platform that
1) enables customers to make energy conservation
decisions that help protect the
environment, 2) affords utilities advanced
outage identification and enhanced power
restoration capabilities and 3) permits
the integration of energy produced by
customer-owned renewable sources (such
as solar or wind applications) into the
network.

The drivers for AMI included both business
and technical needs. The model
selected by IBM and CNP addresses issues
related to both AMI and service management,
as outlined below.

For AMI, business needs are focused
around the market terms and conditions
(timely and accurate monthly reads along
with on-demand reads) and compliance
with the Public Utility Commission of
Texas’ advanced metering rules that are
currently under development.

[Figure 1: Evolving Role of Metering]

AMI technical needs center on the meter functions (voltage alerts, real-time measurements, and time and date stamps) and on an open architecture (imperative for allowing the system to be flexible and grow to meet changing technologies).
Automated meters play a critical
role in the intelligent grid architecture
by providing another sensing device.
This innovative approach transforms the
meter role from simply a usage recorder
into a network sensor and portal, thus
enabling the lag time, or the “latency,”
of providing meter information to the
network to be as low as possible. The current
work with CNP and IBM focuses on
understanding latency throughout the
intelligent grid in order to increase the
speed of data transfer to improve diagnostics
on system status, thus enabling
faster automated restoration of power
when outages occur.
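One simple way to quantify that latency, assuming each meter event carries the time and date stamps mentioned earlier (the event layout below is invented for illustration), is to difference the meter's stamp from the arrival time at the data center:

    from statistics import mean

    # Hypothetical events: (stamped at the meter, arrived at the data
    # center), both in seconds since the epoch.
    events = [
        (1000.0, 1000.9),
        (1060.0, 1061.2),
        (1120.0, 1120.7),
    ]

    latencies = [arrived - stamped for stamped, arrived in events]
    print(f"mean latency:  {mean(latencies):.2f} s")  # 0.93 s
    print(f"worst latency: {max(latencies):.2f} s")   # 1.20 s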

For service management, business
needs dictate looking at ways to reduce
cost by exercising remote meter connect/disconnect, self-service portals and
self-healing systems. Key to this activity
will be the ability to gather meter data for
analysis and usage, voltage profiling and
load management.

The service management technical
needs require building an infrastructure
that can support the remote device control,
meter status reporting, outage/restoration
reporting and diagnostic and distribution
analytics. The meters will need to
have two-way communication capability
and provide data on theft and tampering
flags. The solution that CNP and IBM are
creating has the capability to send firmware
changes to meters to avoid having to
change them out as new market requirements
evolve. Figure 1 shows successive increases in improvement as the scope of metering is broadened.

AMI Selection

[Figure 2: Open Metering System]

IBM helped CNP with the AMI selection criteria for both meter vendor and MDM
(meter data management) solutions for the proposed limited deployment of 10,000
meters in 2007. One of the key criteria was having an open system that was scalable
to meet CNP’s growing needs. The decision process spanned the following
key areas: regulatory, business, technical and operational. Figure 2 illustrates
the key criteria for each of these areas.

At the conclusion of the selection
process, Itron’s OpenWay™ CENTRON®
electricity meter and eMeter’s MDM solution
were chosen for the proposed limited
deployment.

The eMeter MDM is built on a core
application known as EnergyIP, which has
adapters that can be built to interface
with legacy systems and the AMI network
and meter provider.

The MDM includes an AMI management
database that maintains the complex
relationship between the meter, account, premise, service point and communication
node. It processes real-time information
using an integrated message bus that
connects AMI meter systems to meter
data processing and business process
management applications. The eMeter
MDM also uses real-time messaging services
to connect interface adapters that
are tied to CNP’s legacy systems. The
MDM information can also be viewed via
web portals built by IBM using web services
APIs. The MDM collects meter reading
and event information via a connection to Itron’s OpenWay servers.
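The adapter-and-message-bus pattern can be sketched in a few lines of Python. Nothing below comes from eMeter's actual EnergyIP API; the message layout, names and in-memory "store" are assumptions made for illustration:

    import json

    mdm_store = []  # stand-in for the AMI management database

    def billing_adapter(record):
        """Forward a normalized read to a (pretend) legacy billing system."""
        print(f"billing <- meter {record['meter_id']}: {record['kwh']} kWh")

    def handle_bus_message(raw, downstream):
        """Normalize a raw AMI bus message, store it, and fan it out."""
        event = json.loads(raw)
        record = {
            "meter_id": event["meter"],
            "kwh": float(event["reading"]),
            "ts": event["timestamp"],
        }
        mdm_store.append(record)  # persist in the MDM
        downstream(record)        # notify interested legacy systems

    msg = '{"meter": "M-42", "reading": "12.5", "timestamp": "2007-06-01T00:00"}'
    handle_bus_message(msg, billing_adapter)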

Itron’s OpenWay meter architecture is
a true two-way communication system
to the meters with the ability to make
firmware changes to each meter unit. The
meter’s communication across a radio frequency network uses the already-established ANSI standard protocol (C12.22) and is picked up via an OpenWay cell relay. This
OpenWay cell relay is connected to the
communication backhaul via interface with
the BPL boxes. Data is then sent over the
secure BPL communications network to
the OpenWay servers/collection engine.

The CNP AMI solution uses all the technologies
described to build a flexible and
innovative approach to the metering system.
The solution architecture, along with
the data access features, will allow third-party
portal access for retail energy providers
and support innovative customer
premise services and home area network
capabilities.

CenterPoint Energy Houston Electric, LLC is a subsidiary of CenterPoint
Energy, Inc., a domestic energy delivery company that includes electric transmission
and distribution, natural gas distribution, competitive natural gas sales and
services, interstate pipelines and field services operations.

The Key to Economic Expansion in Utilities

Today’s forces driving the global
energy and utility market have
never been more acute. Consumption
increases and volatile fuel
costs coupled with a major focus on the
environment present the industry with
some of the most significant challenges
that it has faced in the last few decades.
Diametrically opposed forces appear to
have converged upon the industry, highlighting
the importance of the policy and
investment decisions currently made on a
global basis. Understanding the impact of policy on long-term investment decisions represents one of the most critical aspects of the industry’s current strategy
and its future structure. How will the
industry be poised in a future of competing
scenarios and decisions?

Energy Consumption Drives Economic Growth

It is anticipated that world electricity
consumption will double between now
and 2030, with average annual increases
of 2.7 percent. Developing markets such
as India and China account for over 70
percent of that growth, with those markets
projected to grow at average annual
rates of 4.6 and 4.7 percent, respectively.
While the 1.5 percent growth projected in the developed countries may seem small, one needs to recognize the efficiency gains
built into these projections. For example,
the United States has seen a nearly 2 percent
decline in energy intensity, which is
the measure of energy use per dollar of
GDP. Technological innovations have contributed
to these gains. This represents
good news for the developed economies
as they continue to grow with less of a
drain on energy resources.
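These projections are easy to sanity-check: at a compound annual growth rate r, consumption doubles in ln(2)/ln(1+r) years. A short Python check using the rates quoted above shows the world figure doubling in roughly 26 years, consistent with the 2030 projection:

    from math import log

    def doubling_time(rate):
        """Years for consumption to double at a compound annual growth rate."""
        return log(2) / log(1 + rate)

    print(f"world (2.7%): {doubling_time(0.027):.0f} years")  # ~26 years
    print(f"India (4.6%): {doubling_time(0.046):.0f} years")  # ~15 years
    print(f"China (4.7%): {doubling_time(0.047):.0f} years")  # ~15 years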

No one debates the importance of
energy and the underlying infrastructure
to expanding and sustaining economic
growth in all markets. Without the underlying
investment in all of the energy value
chain, the world economy faces a daunting
challenge. Generation needs represent the majority of this necessary
investment. While coal-fired generation
has been – and is projected to be – the
bulk of the world’s generation resources,
investment in cleaner sources will be a
critical component of the future portfolio.
Developing and maintaining load capacity
flexibility represents a key challenge for
the industry in meeting current and future
energy consumption.

While these numbers are daunting,
they represent just part of the story.
Growing environmental concerns need
to be integrated into the overall strategy.
In addition, infrastructure requirements
and capital availability magnify the problem.
With this in the forefront of the minds
of utility executives and policy makers, the
future shape of the industry has begun
to emerge.

Environmental Concerns Versus Economics

While the environment has always been
important, its criticality seems to have
risen over the past few years. Daunting
statistics, such as the substantial increase
in carbon dioxide emissions related to fossil
fuel consumption, dominate the thinking
of the public, policy makers and industry
executives as they have never done
before. The consumer’s awareness and
willingness to engage in the debate drives
the industry to proactive strategies. Building
environmental concerns into overall
capacity planning drastically changes
the dynamic of this process. It will be a
unique challenge to understand how to
meet the demand discussed above without
permanently damaging our environment
in a world where coal represents the most
available and economical commodity. This
challenge leads to operational, policy and
technology innovation. The past debate
has in large part centered on economics
versus environment. Innovations in the
areas of clean coal, nuclear and demand
response afford the industry the ability to
change the equation. The industry cannot
afford to continue this trade-off, making
transformational solutions essential. Public policy
needs to be an integral component of this.

Public Policy-Driven Goals

Recognizing the criticality of the electric
and gas utility infrastructure, policy makers have historically driven a monopolistic
or government-owned business model.
The model helped drive a relatively successful
and nonduplicative investment in
critical industry infrastructure, providing
an adequate vehicle for large capital
investments. Policy was focused on fair
and equal access to the necessary commodity.
While these models allowed for a
thoughtful build-out of infrastructure, they
resulted in certain market inefficiencies.

Policy makers reacted in different ways
to counteract these inefficiencies. In some
jurisdictions, regulators sought to drive
efficiencies through command and control
mechanisms. Strict regulations coupled
with post-mortem prudency reviews
became the proxy for customer interests.
Globally, policy makers continue to innovate
in the areas of regulation and industry
structure to effect this type of change.
Some notable examples of this over the
past decade include:

  • As part of the restructuring and privatization
    of the electric industry, the
    U.K. implemented a performance-based
    price control system. Under this system,
    the electric industry has been able to
    achieve significant efficiency gains and
    lower overall customer costs. This system
    allows for flexibility and provides
    incentives for innovation.
  • In 1999 the European Union began its
    journey toward an open electricity market.
    This called for a gradual introduction
    of competition to various customer
    classes, leading to a full open market in
    July of 2007. This policy has resulted
    in price reductions, especially in the
    wholesale market. Additionally, a fundamental
    shift in utility operations, strategies
    and business models has begun
    to emerge.
  • In 2002 the state of Texas adopted
    energy deregulation. To promote fair
    competition, the incumbent companies
    split into retail electric providers, transmission
    distribution service providers
    and wholesale generators. Retailers
    offered consumers choices based on
    the market price of electricity, and customers
    were allowed to switch suppliers to better
    meet their needs. This model has promoted
    efficiency and innovation in
    the market.

While the tactics may change over the
years, the fundamental goal of policy
makers and regulators remains the same:
Provide the consumer with access to cost-effective
and reliable energy on a sustainable
basis. With these goals in mind, continued
policy and regulatory innovation
will emerge in both developed and developing
markets. The regulatory bodies continue
to face the challenge of balancing
the needs of the consumer while promoting
innovation and investment.

Developing Markets Face Different Challenges

While the developed markets strive to
improve and strengthen their overall
system, the developing countries strive
to build theirs. Access and adequate supply
lie at the core of the challenges for
these markets. Reliability does not sit far
behind these as the markets mature and
compete in the global economy. Attracting
and retaining capital represents the key
issue in meeting these challenges. With
investment needs estimated in the billions of dollars, these
economies will struggle without sufficient
outside investment in the industry.

With this in mind, the major developing
markets have begun a fundamental
restructuring of the market, moving from
a government-owned and fully integrated
monopoly to a separated and privatized
market. This has resulted in the creation
of some of the largest stand-alone generation
and distribution companies in the
world. In addition to promoting focus, this
allows for an entirely new set of investment
sources for these markets. Retaining
these new investments through fair and
adequate returns remains critical to the
sustainability of these markets. The financial
community needs to be convinced of
the long-term viability of the structure to
continue its commitment.

Even though the utility industry by its
nature is local, global trends and forces
are becoming more important to the
industry. As the developing countries
enjoy exponential growth, their needs
account for a greater portion of the supply
needs of the industry. Increased workforce
and raw material demands in these
markets influence the overall dynamics
in the other markets. While commodity
markets (other than oil) remain largely
localized, the advent of LNG and other
advances have a more global impact. In
addition, local and global forces drive
innovation in the industry.

Industry Structure Continues to Evolve

On a global basis, the ownership and
operating structure of the industry continues
to evolve as companies and markets
emerge. While vertically integrated utilities
continue to play an important role in
the industry, functional and ownership
separation of business has occurred in
many markets. Jurisdictions have moved
forward with these models to enhance the
competitive market and focus investments
in particular segments. Moving to a fully
competitive market requires some level of
separation of the various components of
the value chain. Policy makers and strategists
will continue to strive for the unique
model that promotes competition while
retaining the inherent efficiencies of a
vertically integrated model.

Consolidation continues to progress
on a global basis. Large deals progress
in Europe, creating some of the biggest
utilities in the world. While scale remains
important, these represent strategic deals
that position the companies for growth in a
highly competitive market. While these
large, complex deals present unique challenges
to all the stakeholders, the overall
benefits are significant. This marks the
beginning of truly global utilities. Private
equity investments continue to increase,
but have not had the impact previously
contemplated. It is likely that this activity
will increase, but only in certain segments
of the business. The current ownership
structure of the utility industry will continue
for the near future.

What Does the Future Hold?

The industry sits at somewhat of a tipping
point as market forces converge in
a unique way. While the past decade has
been challenging and interesting, the next
decade promises to be as exciting. The
many decisions facing each of the key
stakeholders require careful consideration
as these will have a lasting impact on the
industry. The key trends will be:

  • Public policy and regulation will emerge
    that fosters innovation and drives
    investment;
  • Consolidation will continue resulting in
    large multijurisdictional entities;
  • Competitive markets will continue to
    emerge fueled by the consumer’s desire
    for greater control over their energy
    choices;
  • Outside investment will foster significant
    innovation in the area of environmentally
    friendly generation sources;
    and
  • Emerging markets will receive significant
    capitalization allowing a more
    rapid attainment of full access.

The result of these future trends will be
a stronger, more stable industry that not only
competes well in the global market, but
also provides the underpinning for the
expansion of the global economy.

Issues Affecting the Electricity Transmission and Distribution System in North America

Executive Summary

After more than two decades of underinvestment, the North
American electricity delivery infrastructure is struggling to meet
today’s changing and ever-growing demands. This was made
evident to the public on August 14, 2003, a warm, calm summer
day when large portions of the Midwest and Northeast United
States and Ontario, Canada, experienced a massive electric power
blackout. The outage affected an area with an estimated 50
million people and 61,800 megawatts (MW) of electric load in the
states of Ohio, Michigan, Pennsylvania, New York, Vermont,
Massachusetts, Connecticut, New Jersey and the Canadian
province of Ontario. The result of this event was to raise the
profile of electricity delivery issues to a place of prominence on the
political agendas of both the United States and Canada.

The North American transmission system, or “Grid,” is a complex
network of interconnected power lines that has evolved over the
last 100 years, delivering needed electricity from more than
950,000 MW of generating stations to well over 100 million
customers. This must be accomplished by keeping
supply and demand in balance at all times while
ensuring that customers can flip a switch and be certain that the
electricity they need will be available.

Creating policies to keep the system working is based on striking
a balance between three key drivers: adequate and reliable supply,
acceptable electricity prices and environmental sustainability.

Historically, as the system was developed, vertically integrated
regulated utilities were focused on delivering adequate supply and
reliability, building a system of generating stations and
transmission lines that would support the evolving needs of
electricity consumers. Over the last two decades, there has been
a trend toward deregulation of the sector, meaning the
unbundling of generation, transmission and distribution. The
objective was to develop efficiencies through competition and to
provide broader access to lower-cost electricity generation. In this
model, managing the operation of the grid has become the
purview of Independent System Operators (ISOs) and Regional
Transmission Organizations (RTOs).

Reliability of the system has been managed through adherence to
voluntary reliability standards established through the North
American Electric Reliability Corporation. The cascading events of
the 2003 blackout and the recognition that failure to meet these
voluntary standards was at fault made it clear that this is no
longer sufficient to meet the needs of the system. The major
recommendation of the United States/Canada Blackout Task Force
was to create the Electricity Reliability Organization (ERO), which
would make compliance with reliability standards mandatory, with
penalties for utilities that fail to meet these standards. This was
legislated in the US Energy Policy Act (EP Act) of 2005, and the
new ERO came into being earlier this year.

Improving the reliability of the system must also address its future
needs. Electricity consumption is forecast to grow by more than
40% by the year 2030. In the United States alone, spending on
transmission is growing, with an estimated $31.5 billion to be
spent by 2009. Opposition to new lines makes siting extremely
difficult, and, in many cases, primarily in the United States, cost
allocation is also a major issue. These two issues can and often
do cause delays in transmission projects. The US government has
identified a strong electricity infrastructure as essential to the
country’s economy. As a result, the EP Act of 2005 has put
measures in place to ensure that transmission is built when
needed. These measures include defining a process for the
establishment of national transmission corridors and offering
more attractive financial incentives to investors.

Delivering resolution of the above issues is a daunting task, and to
add to the challenge, this must all be accomplished in a changing
world. Environmental requirements are now causing every aspect
of how electricity is generated, delivered and consumed to be
questioned. Climate change is at the top of the political agenda,
and electricity generation through the use of fossil fuels is a major
source of greenhouse gases. The result is a strong push for new
energy efficiency and demand response programs to reduce
overall use and to shave the peak, while also driving the use of
more environmentally friendly “green” technologies for new
generation. Many of these renewable energy sources, such as
wind or solar energy, have technical characteristics very different
from the more traditional forms of generation. These sources
tend to be non-dispatchable, meaning that their contribution to
the grid is intermittent and not easily controllable by the system
operator. This is creating new challenges for the system moving
forward.

Addressing many of these issues and moving toward the system
of the future requires the use of new enabling technologies. The
implementation of “smart” meters to enable time-of-use billing
and remote control devices to enable system-controlled demand
response is underway and is changing the very nature of the grid.
This will be followed by the implementation of a number of other
new technologies in the years to come.

This century-old system is experiencing a period of transition
unlike any that has been seen before. Improving the reliability of
the system, while meeting the requirements for growth, all in an
environment of enormous structural and technical change, is a
great challenge for the future. Achieving these goals will be
dependent upon having a well-trained and highly motivated
workforce. Yet the current work force is aging, with about 50%
eligible to retire in the next decade. Therefore, increased
pressures to meet reliability standards, build new transmission and
implement the technologies of the future will make workforce
management much more important.

Introduction

“On August 14, 2003, large portions of the Midwest and
Northeast United States and Ontario, Canada, experienced an
electric power blackout. The outage affected an area with an
estimated 50 million people and 61,800 megawatts (MW) of
electric load in the states of Ohio, Michigan, Pennsylvania, New
York, Vermont, Massachusetts, Connecticut, New Jersey and the
Canadian province of Ontario. The blackout began a few minutes
after 4:00 p.m. eastern daylight time (16:00 EDT) and power was
not restored for four days in some parts of the United States. Parts
of Ontario suffered rolling blackouts for more than a week before
full power was restored.”1

Modern society has come to depend on reliable electricity as an
essential resource for national security; health and welfare;
communications; finance; transportation; food and water supply;
heating, cooling and lighting; computers and electronics;
commercial enterprise; and even entertainment and leisure—in
short, nearly all aspects of modern life. Customers generally know
little about the system that provides them with their electricity, but
they do expect that electricity will be available when needed at
the flip of a switch. While most customers have experienced local
outages from time to time caused by an accident or severe
weather, what is not expected is the occurrence of a massive
outage on a calm, warm summer day. Widespread electrical
outages, such as the one that occurred on August 14, 2003, are
rare, but they can happen if multiple reliability safeguards break
down.

The North American transmission system is a complex network
that has evolved over the past century to meet the ever-growing
power-hungry needs of society. The system was developed on a
regional basis and expanded on a piecemeal basis, not planned in
an integrated fashion across the continent. The last 20 or 30
years have seen chronic underinvestment in new transmission, far
below investment in generation. The result is an aging
infrastructure that is falling apart at the seams. With current
environmental requirements and electricity deregulation changing
the rules of the game, the system is being challenged as never
before. And with projections of continuing growth and rapidly
changing technology, huge investments are required to improve
the reliability of the system today and to meet the needs of the
future. Unfortunately, it can take an extreme event such as the
2003 blackout to bring this issue to the forefront and drive
needed action.

In response to a request from ClickSoftware, this white paper will
examine the issues faced by the electricity transmission and
distribution system, including those related to the workforce, as it
struggles to move forward and keep the lights on for the people
of the United States and Canada.

North American Transmission System

The electricity delivery infrastructure represents the culmination of
100 years of development and growth to create a complex system
that must always be in balance while delivering the necessary
electricity from the source of supply to the end user. This
electricity infrastructure represents more than $1 trillion (U.S.) in
asset value, more than 200,000 miles (or 320,000 kilometers) of
transmission lines operating at 230,000 volts and greater,
950,000 megawatts of generating capability, and nearly 3,500
utility organizations serving well over 100 million customers and
283 million people.2

The system is built of a large number of generators of different
types – fossil, hydro, nuclear, wind, biomass and others –
producing electricity at low voltage. This electricity is then
“stepped up” to high voltage for delivery over a network of
interconnected bulk power lines, and then the voltage is once
again lowered for distribution to final customers. This system is
known as the “Grid.”

While customers are used to electricity always being available at the
flip of a switch, in reality, maintaining the grid is a very complex
undertaking that requires a real-time assessment, control and
coordination of the thousands of generators, high-voltage
transmission system and final distribution to customers. This is
because, in comparison to all other forms of energy, electricity
cannot be economically stored, so at all times, supply and demand
must be in complete balance.

The North American grid is largely interconnected but is not one
system. It is actually made up of three major systems. The Eastern
Interconnection includes the eastern two-thirds of the continental
United States and Canada, from Saskatchewan east to the Maritime
Provinces. The Western Interconnection includes the western third
of the continental United States (excluding Alaska), the Canadian
provinces of Alberta and British Columbia, and a portion of Baja
California Norte, Mexico. The third interconnection comprises most
of the state of Texas. In general, these three systems are completely
independent of one another, with only some minor DC connections
between them.

Even though the grids are primarily interconnected in a north /
south flow from Canada to the United States, there remain many
differences in policies between the two countries. In the United
States, electricity policy is established at the federal level and is
regulated by the Federal Energy Regulatory Commission (FERC). The
systems are then managed on a regional and local level by
individual utilities and market operators, which must meet the
requirements of public utilities commissions. In Canada, electricity
is the responsibility of the provinces and each province has its own
policies and regulations. There is little to no federal involvement in
the electricity sector.

Key Drivers Affecting the Electricity System

Electricity policy is generally determined as a result of striving to
satisfy three key drivers: adequate supply and reliability (technical),
acceptable prices for customers (commercial) and environmental
concerns (social).

This was achieved in the past through the use of relatively large,
vertically integrated utilities that were responsible for all aspects of
electricity supply: generation, transmission and delivery to
customers. These companies operated in a defined service territory
and had an obligation to serve. The objective was to build a strong,
stable system that would assure adequate, reliable electricity supply
at the lowest possible cost. Consumers would pay regulator-approved
prices based on cost of service. Environmental
requirements were met through adherence to regulations regarding
discharges of pollutants to air and water.

Keeping a balance between these drivers is an ongoing challenge
for policy makers and regulators, as often, these drivers can be in
conflict with one another. Overall, in North America, the low cost
of electricity has been a major contributor to industrial and
commercial development. Supply and reliability can always be
improved by increased investment, but at a higher cost to
consumers. Reducing environmental impacts also drives the system
to add higher cost, but lower emitting sources of supply.

Traditionally, the emphasis has been on meeting technical
requirements to ensure an adequate and reliable system. More
recently, commercial requirements have become more important
and have resulted in deregulation to promote competition and
provide broader access to lower supply costs. Currently,
environmental issues are having far-reaching effects on the
system as a whole, driving major changes that will
dramatically affect the way the grid looks in the future.

Deregulation

More recently, to make the system more efficient and lead to lower
costs, many jurisdictions have decided to introduce competition
through deregulation. This was achieved by unbundling the services
into generation, transmission and distribution and by allowing
customers to choose their provider, at both the wholesale and retail
levels. The system is then managed by Independent System
Operators (ISOs), which operate the market.

As part of this restructuring, transmission systems have remained
regulated, due to the recognition that there is a need to make sure
the infrastructure is capable of delivering the electricity. This is being
done by the creation of Independent System Operators.
Wholesale access to transmission grids enables local distribution
companies, or other large buyers, to use the grid to purchase
electricity from the most competitive generation sources. Since the
issuance of Order 2000 in 1999, the FERC has promoted the
formation of Regional Transmission Organizations (RTOs) as the
mechanism to achieve wholesale access, thus enabling US
consumers to obtain lower-cost power from other regions. Finally,
retail access could economically benefit consumers as a result of
their having choice among suppliers.

British Columbia Energy Plan
Zeros in on New Greenhouse Gases

The new BC Energy Plan: A Vision for Clean Energy Leadership
puts British Columbia at the forefront with aggressive targets for
zero net greenhouse gas emissions, new investments in
innovation and an ambitious target to acquire 50% of BC
Hydro’s incremental resource needs through conservation by
2020. Among the highlights:

Environmental Leadership:

  • All new electricity projects developed in BC will have zero
    net greenhouse gas emissions.
  • Existing thermal generation power plants will reach zero
    net greenhouse gas emissions by 2016.
  • Achieve zero greenhouse gas emissions from coal-fired
    electricity generation.
  • Clean or renewable electricity generation will continue to
    account for at least 90% of total generation, placing
    the province’s standard among the top jurisdictions in
    the world.
  • Eliminate all routine flaring at oil-and-gas-producing wells
    and production facilities by 2016 with an interim goal to
    reduce flaring by half (50%) by 2011.
  • Achieve the best coalbed gas practices in North America.
    Companies will not be allowed to surface-discharge produced
    water, and any reinjected produced water must
    be injected well below any domestic water aquifer.

Energy Conservation and Efficiency

  • An ambitious target to acquire 50% of BC Hydro’s
    incremental resource needs through conservation by 2020.
  • New energy efficiency standards will be determined and
    implemented for buildings by 2010.

Source: BC Ministry Web Site Feb 27, 2007

Deregulation has been successful in jurisdictions with adequate
supply, primarily by increasing the efficiency of existing assets.
However, one of the major issues associated with deregulation is
that there is no longer an obligation to serve since the assumption
is that market-pricing mechanisms would ensure adequate supply.
The result has been somewhat less success in creating appropriate
and timely incentives to build new generation. This has had an
adverse effect on meeting adequacy of supply and reliability
requirements. Markets continue to evolve to meet the ongoing
needs of the system.

Global Warming

Concerns about the environment and pollution have long affected
the choice of generation options and have increased the costs of
some generation sources, such as coal, as pollution abatement
equipment has been added to plants.

At the present time, there is no greater driver to change in the
electricity system than the environment. Environmental issues and,
in particular, global warming have leapt to the top of the global
agenda. A movie and presentations on global warming have made
former US Vice President Al Gore into a modern cult icon. Recent
reports by Stern in the UK and the IPCC have removed any doubt
as to the importance to the planet of global climate change and the
need to take immediate and decisive action.

Global warming is a result of greenhouse gases entering the
atmosphere from burning fossil fuels. This comes primarily from
two major industries: transportation and electricity generation.
Since it is feasible to generate electricity without burning fossil fuels
through the use of renewable energy sources such as wind,
hydro and biomass, as well as nuclear power, there is considerable
pressure on the electricity industry to reduce its emissions of
Greenhouse Gases (GHGs).

Both the International Energy Agency World Energy Outlook 2006
(WEO) and the US Department of Energy’s Energy Information
Administration (DOE EIA) Annual Energy Outlook 2007 (AEO)
Reference Case clearly show that continuing down the current
policy path will lead to increased use of fossil fuels over the next 25
years, with resultant accelerating increases in carbon dioxide
emissions. The WEO then goes on to state that this result is not set
in stone and that in an alternate policy scenario, the policies and
measures that governments are currently considering to mitigate
carbon emissions are assumed to be implemented. The result is
significantly reduced fossil fuel demand and associated emissions.

It would seem that almost every day, a government or government
agency is announcing new measures to protect the environment.
California has already introduced measures to reduce greenhouse
gases. More recently, the government of British Columbia
announced its “zero emissions” energy plan (see box). And many
more announcements are imminent.

Security of Supply

The need for oil imports from the Middle East has shown how
vulnerable America is to what it considers very unstable political and
potentially anti-American regimes. Since September 11, 2001,
security of supply has been added to energy
considerations. The result of this concern has been policy incentives
in the EP Act of 2005 by the administration that emphasize energy
independence. These include renewal of the use of nuclear power
and considerable emphasis on continued use of coal, a dirty but
domestically plentiful resource. Considerable research funds are
being invested in new coal technologies, such as “clean coal” and
carbon sequestration, to enable coal to continue to be used, but in
a more environmentally friendly manner.

On the other hand, Canada has been blessed with almost limitless
energy resources, and the current government is very focused on
developing Canada as an “energy superpower,” in part to help
meet the ever-growing needs of the United States. The multitude
of energy choices available in Canada has led to significant regional
differences; Alberta is moving forward with research into new clean
coal technologies while Ontario is committing to shutting down all
coal-burning facilities at the earliest opportunity.

Reliability – Keeping the Lights On

Electric reliability means continuity of service and acceptable power
quality. North Americans have come to expect a very high level of
reliability from the electricity system. Occasional blackouts and/or
brownouts are unacceptable to consumers. The expectation is that
when the switch is turned on, the lights will come on, all the time.

Poor system reliability imposes significant economic consequences
on society. Estimates of the total costs of the 2003 blackout in the
United States range between $4 billion and $10 billion (US dollars).
In Canada, gross domestic product was down 0.7% in August,
there was a net loss of 18.9 million work hours and manufacturing
shipments in Ontario were down $2.3 billion (Canadian dollars).3

Public safety is at risk, as without power, controls of essential
systems (e.g., traffic lights, public transit, hospital emergency
services) are lost. A large outage in the cold winter months can
leave many freezing in the dark. Many industries are dependent
upon large-volume, reliable power to drive their factories and
processes. Concern over reliability in a given area can drive
businesses to relocate to areas that have more reliable systems, thus
greatly impacting local economies.

Reliability has two key aspects. The first is adequacy of supply,
which means having enough generation and transmission capacity
to meet system need. The second is short-term or operating
reliability, which requires the system to withstand disturbances or
contingencies and be able to continue to operate when there are
problems with the infrastructure or interconnected systems.

The North American Electric Reliability Corporation (NERC) is an industry
organization that draws upon the technical expertise of its
members. NERC has ten regional councils, comprising about 140
control areas in Canada, the US and the northern Baja region of
Mexico. Most Canadian electric utilities/system operators that have
interconnections with other regions are members of NERC’s
regional councils.

Why must there be a crisis to improve reliability?

Prior to the August 2003 blackout, much of the attention
being paid to the electricity industry was related to
generation and the alternatives available to meet demand.
The emphasis was on deregulation as a means to create
competition to both increase efficiency and bring down
costs.

The blackout created the crisis necessary to get political
focus on the deficiencies in the grid. And it made it clear
that the system must be improved. A joint US/Canada task
force studied the event for one year and concluded that lack
of adherence to voluntary reliability standards by operators
working to manage the aging infrastructure was the
primary cause.

Its major recommendation was to create an Electricity
Reliability Organization (ERO), which would make
compliance with reliability standards mandatory, rather than
voluntary, as is currently the case with NERC standards.
One year later, in 2005, this recommendation was enacted
in legislation in the EP Act of 2005. After FERC
approved NERC as the ERO in July 2006, the new ERO
started operations in January 2007, some three and a
half years after the event.

It is interesting to note that it was another crisis, the big
blackout of 1965, that resulted in the creation of NERC (in
1968) and management of the reliability of the system.

NERC’s stated mission is “to ensure that the bulk electric system in
North America is reliable, adequate and secure.” Toward that end,
the organization develops planning standards and operating
policies, which are the main methods it employs to achieve
reliability. However, in the past, its standards and policies were
voluntary and were enforced by peer pressure.

In July 2006, FERC designated NERC as the ERO under section 215
of the Federal Power Act, a new provision added by the Energy
Policy Act of 2005 to establish a system of mandatory, enforceable
reliability standards under the Commission’s oversight. The ERO will
manage reliability by proposing standards, which are to be
approved by FERC, and then enforcing these standards and levying
fines for non-compliance, subject to FERC approval. Most
Canadian provinces have negotiated participation so that there will
be clear, continent-wide reliability standards and enforceability.

Most customer outage incidents are due to distribution
system problems. Some of the most common causes of distribution
outages include scheduled outages, loss of supply, tree contact,
lightning, defective equipment, adverse weather and the human
element. These results suggest that, from the consumer viewpoint,
the reliability of the bulk power system is somewhat higher than
that of the distribution system.

This is consistent with the general view that the flexibility in the bulk
delivery system enables system operators to compensate for
contingencies. For example, if a generating unit experiences a
technical problem and must shut down, the system operator can
call on reserve margins to meet demand. If a transmission line trips
off, the power can flow across different lines so that demand is still
satisfied in each area. In the absence of exceptional circumstances,
consumers will not be aware of the disturbance. However, when
larger bulk system outages occur, they affect more people and tend
to last longer, as demonstrated by the 2003 blackout.
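
The margin arithmetic behind this operating flexibility can be made concrete with a toy check. All figures below are invented for illustration; real operators dispatch against formal reserve requirements rather than a simple sum.

```python
# A toy numeric illustration of covering a unit trip from reserve margins.
def can_serve(demand_mw: float, online_mw: list, reserve_mw: list) -> bool:
    """True if remaining units plus callable reserves can still meet demand."""
    return sum(online_mw) + sum(reserve_mw) >= demand_mw

online = [300, 250, 200]    # MW from units currently running
reserve = [100, 80]         # spinning reserve the operator can call on
print(can_serve(700, online, reserve))   # True: 750 + 180 >= 700, normal operation
online.remove(250)                       # a 250 MW unit trips off unexpectedly
print(can_serve(700, online, reserve))   # False: 500 + 180 = 680 < 700, load at risk
```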

Distribution systems, on the other hand, have less flexibility because
they have less redundancy built into them. The cost of duplicating
the infrastructure would be high and as disturbances on these
systems do not affect as many people, the benefits would be small.
A lack of redundancy and generally longer distribution lines also
mean that rural consumers experience lower reliability than urban
consumers do. This puts added pressure on distributors to be able
to respond to outages and take corrective actions quickly.

So how is reliability enhanced? Primarily through investment in new
generation and transmission to build a stronger system or by
reducing demands on the system in some other manner.

Enhancing and Expanding the Power Grid – Building New Transmission

System improvements and expansion are required to continue
improving the reliability of the existing system, replace aging
infrastructure and accommodate electricity demand growth. The
US DOE EIA Annual Energy Outlook 2007 Reference Case forecasts
electricity consumption growth of 1.5% per year, for a total
increase of 43% by 2030 (1.5% compounded over 24 years yields a
factor of roughly 1.43). Growth rates in the Canadian
provinces are projected to be somewhat similar. And all of this
increase cannot be accommodated without increased transmission
infrastructure.

As a result, investment in transmission is continuing to increase at a
very rapid rate. Looking across the continent, major plans to add
transmission and distribution infrastructure are universal.

US EP Act of 2005 Designates National Corridors

The EP Act of 2005 directed the secretary of energy to conduct
a nation-wide study of electric transmission congestion by
August 8, 2006. Based upon the congestion study, comments
thereon and considerations that include economics, reliability,
fuel diversity, national energy policy and national security, the
secretary may designate “any geographic area experiencing
electric energy transmission capacity constraints or congestion
that adversely affects customers as a national interest electric
transmission corridor.”

Now that this study is complete, the DOE expects to open a
dialogue with stakeholders in areas of the nation where
congestion is a matter of concern, focusing on ways that
congestion problems might be alleviated. Where appropriate in
relation to these areas, the department may designate national
interest electric transmission corridors.

Source: US DOE National Electric Transmission Congestion
Study, August 2006

In the United States, spending reached $5.8 billion in 2005, an 18%
increase from the previous year. And spending is anticipated to
continue to grow, with $31.5 billion planned to be spent by 2009.
In addition to transmission, accommodating both the significant
replacement needs of the aging distribution infrastructure and
continued growth will require that $14 billion be spent on average
over the next ten years.

In Canada, Ontario has issued a draft plan showing a need for
spending in excess of $4.5 billion in transmission and distribution,
and British Columbia is planning to spend $3.2 billion over the next
ten years. Both jurisdictions have identified that transmission
investment is the highest priority to protect the integrity of the
system in the short to medium term.

Building new transmission lines is anything but easy. Lines can
cover long distances and pass through many communities, which all
have a say in their approval. Although the objective is to minimize
the impact on the community, explaining the benefits and necessity
of the project to community members is complex, and in many
cases the benefits of the project can be outside of the community
being impacted. Therefore, project delays are almost inevitable
during the planning stages as transmission companies work to
address the issues of siting and cost allocation. This can create
considerable pressure to reduce build times once approvals are
secured.

Siting

In most cases, building transmission is more difficult than building
new generation as there are frequently several alternatives for
routing (siting) the line. When it comes to transmission projects,
there is a very strong NIMBY (Not in My Back Yard) factor. In fact,
there is a whole industry associated with working with communities
to fight new projects of this type. It has become increasingly
difficult to site new projects, as many opponents are now working
according to the BANANA (Build Absolutely Nothing Anywhere
Near Anyone) principle.

Since these power lines pass over long distances, many
communities are affected. It is thus not always simple to
demonstrate the benefit of a new transmission line to a local
community. Often the benefit accrues far away, to the communities
in need of the new system link, rather than to those along the route.

Many regulators in the United States have clear siting rules and
policies in place. This, coupled with strong utility community
relations programs, definitely helps to get approvals in a more
timely fashion. And new policies in the EP Act of 2005 are also
designed to facilitate new build transmission projects (see section
on strengthening interconnections below).

In Canada, siting issues can be even more difficult, as distances can
be even longer and most transmission projects will pass through
aboriginal lands. The project sponsor must then secure an
agreement with each band whose land the project will impact. In
some projects, this can mean 50 or more agreements, any one of
which can stop the project.

Cost Allocation

In the United States, new projects often go beyond the territory of
one regulator. Regulators frequently have no hard and fast rules on
cost allocation and usually address these issues on a case-by-case
basis. This can create delays in project implementation, and
unpopular rulings may make a project not viable.

In Canada, since most transmission does not cross regional lines,
cost allocation is not an issue. It is within the responsibility of the
provincial regulator to approve the costs and allocate them to the
rate base.

Strengthening Interconnections

Historically, the electric systems in North America were vertically
integrated and each was responsible for a given territory. Whether
companies were public or private, they focused on providing service
to their customers in their regulated service territories. External
trade and energy transfers were of secondary concern.

One of the factors influencing electricity sector deregulation was
that many customers in the higher cost regions of the United States
had no access to lower-cost electricity from other areas. To address
this issue, since the issuance of Order 2000 in 1999, the US Federal
Energy Regulatory Commission (FERC) has promoted the formation
of Regional Transmission Organizations (RTOs) as the mechanism to
achieve this wholesale access. The structure is intended to promote
competition by providing non-discriminatory access to transmission
within the RTO area and to eliminate excessive transmission use
charges to reduce costs.

It is generally accepted that increasing interconnections also
increases system reliability, as it makes the system more flexible to
accommodate faults. On the other hand, as seen in the 2003
blackout, the risks can also be higher, hence the need for more
stringent reliability standards, as the system will only be as strong as
its weakest link.

Improvements in interconnections will also result in less congestion.
While congestion is not a result of deregulation, the unbundling of
the system has highlighted the need to address it. In the EP Act of
2005, the US government acknowledged the importance of a
strong national grid, and therefore mandated regular congestion
studies and created the opportunity to create national electric
transmission corridors (see box) to enable new transmission in areas
where there is a need to reduce congestion. In addition, the EP Act
has directed FERC to establish, by rule, an incentive-based rate
treatment for the transmission of electricity in interstate commerce
by public utilities to benefit customers through increased reliability
and reduced congestion.

In Canada, regulating and authorizing the construction and
operation of international power lines and designated interprovincial
lines under federal jurisdiction is the responsibility of the
National Energy Board.

An East-West Grid in Canada?

Regional integration of the electric transmission grid is relatively
strong, with most Canadian electricity connected and flowing in
a north-south direction. In Canada, electricity is under provincial
jurisdiction and the amount of interconnection across provinces
is relatively weak.

Following the 2003 blackout there has been considerable
interest in improving the east-west connections and further
integrating the Canadian grid. This has substantial difficulties,
as the distances are very long, meaning that integration would
be costly and stability would be difficult.

However, there has been good progress in the consideration of
new, broader east-west regional connections. Ontario and
Quebec are improving their interconnection, and there are
proposals to greatly improve the interconnections between BC
and Alberta.

In March 2007, the federal government announced that more
than $500 million of investment through its eco trust is
earmarked to support Ontario’s initiative to create an
interconnection with Manitoba.

Non-Wire Solutions

Not all solutions for improving reliability and increasing the flexibility
of transmission systems require new investment in transmission.
Transmission is only one corner of a triangle in which all elements
have to be in balance to create a strong, reliable system. The others
are generation and demand management.

When looking at the need for a new transmission project,
consideration must be given to alternative solutions. One is to
provide new generation closer to the loads. This is becoming
increasingly difficult in the deregulated environment, as locating
generation facilities is not easily accomplished. However, even in
open markets, market operators do have the flexibility to offer
incentives to generators that locate in areas that improve the
reliability of the overall system.

Of increasing interest is the ability to control demand. It is
becoming widely accepted that the lowest-cost kWh is the one not
generated. In the past, demand-reduction programs have been
difficult to implement, as utilities saw little benefit in spending
money to reduce their overall revenues. Therefore, it has become
increasingly important to ensure that programs are well defined so
that regulators will allow the cost of these programs into the rate
base.

The benefit of demand-reduction programs is that they tend to
satisfy all three key drivers affecting the electricity system. Demand
reduction increases supply adequacy and reliability, reduces total
cost to consumers and positively impacts the environment. In fact,
it is the environmental benefits that are driving strong interest in
these programs today. Increased efficiency is the only source of
supply that has zero environmental impact. The US EP Act of 2005
places significant importance on energy efficiency through
mandating improved standards and providing incentives for energy
efficiency programs.

There are two types of demand reduction. In Demand
Management (or efficiency) programs, the total usage of electricity
is reduced, and in Demand Response programs, the emphasis is on
reducing usage during peak times through either temporary
reductions in load or shifting loads to off-peak hours.
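
The difference between the two program types can be illustrated with a toy load profile. The sketch below is purely illustrative: the hourly figures, percentages and peak window are invented for the example.

```python
# Demand Management lowers total usage; Demand Response shifts usage off-peak.
hourly_load = [40] * 14 + [70] * 4 + [40] * 6   # MW per hour; afternoon peak

def demand_management(load, efficiency_gain=0.10):
    """Efficiency: every hour's usage drops, so total energy falls."""
    return [mw * (1 - efficiency_gain) for mw in load]

def demand_response(load, peak_hours=range(14, 18), shed_mw=10):
    """Peak shifting: load moves off-peak; total energy is unchanged."""
    shifted = list(load)
    for h in peak_hours:
        shifted[h] -= shed_mw
    for h in range(6):                           # the shed energy returns overnight
        shifted[h] += shed_mw * len(peak_hours) / 6
    return shifted

print(sum(hourly_load))                          # 1080 MWh over the day
print(sum(demand_management(hourly_load)))       # 972.0 MWh: total reduced
print(round(sum(demand_response(hourly_load))))  # 1080 MWh: total kept, peak cut
```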

It is interesting to note that in Canada, the term “conservation”
continues to be used, while in the United States the term
“efficiency” is more prevalent. While they have the same
objectives, the connotations are considerably different.
“Conservation” still has the connotation of some level of sacrifice
or doing without – as in turning down the thermostat and wearing
a sweater. On the other hand, “efficiency” is all about technology
and achieving the same level of comfort with less. In any case,
there are as many efficiency programs in place as there are
jurisdictions. In fact, lack of uniformity and local and regional
differences in programs are cause for concern as they have the
potential to dilute the benefits of these programs and, in some
cases, cause customer confusion. Most programs to reduce usage
are focused on improving energy efficiency standards for various
types of equipment and then providing incentives to encourage
their rapid assimilation into society.

What is new is the increased emphasis on demand response, or
peak shifting. This has the most impact on transmission issues, as
reducing demand at peak times reduces congestion so that new
transmission can be deferred or cancelled altogether. Deregulated
markets have provided new ways to address this concern. For the
first time, pricing mechanisms are being used to try to change
customer behavior.

This is now possible due to the availability of technology to
implement these programs. For example, implementing time-of-use
pricing to encourage time shifting of load requires metering
that can provide the data to utilities on time of use. Other
programs, in which automatic controls are put on large appliances
such as air conditioners so that utilities can cycle them off remotely
at times of peak demand, are also possible. Customers who opt for
such programs are offered pricing benefits. In the past, there were
no technologies in place to enable programs such as these.
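
A minimal sketch of the billing arithmetic that interval metering enables follows. The rates and the peak window are illustrative assumptions, not any utility's actual tariff.

```python
# Time-of-use billing: price each hour's consumption at that hour's rate.
ON_PEAK_HOURS = range(14, 18)    # a hypothetical 2 p.m. - 6 p.m. peak window

def tou_bill(hourly_kwh, on_peak_rate=0.15, off_peak_rate=0.06):
    """Sum each hourly read priced at the rate in effect for that hour."""
    return sum(
        kwh * (on_peak_rate if hour in ON_PEAK_HOURS else off_peak_rate)
        for hour, kwh in enumerate(hourly_kwh)
    )

usage = [1.0] * 24                     # one day of hourly reads: 1 kWh per hour
print(round(tou_bill(usage), 2))       # 4 x $0.15 + 20 x $0.06 = $1.80
```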

New Challenges to the Transmission Infrastructure

The requirements for new transmission to replace aging
infrastructure, improve reliability and meet the ever-increasing
growth in electricity demand are certainly enough to stress the
system in terms of resources, both financial and human. However,
this is not all. From deregulation causing uncertainty in generator
type and location, to increased use of renewables with inherent
characteristics that have not been experienced on the grid before,
to the rapid change and requirement for new technologies, the key
strategies in place to address the drivers discussed above place new
and previously unknown challenges on the system.

Independent Power Producers

In the deregulated markets throughout North America, there
are many independent power producers. Generators build and
connect to the grid depending upon the type of fuel and the nature
of their generation. In some markets, they may be totally merchant
plants, and in others long-term Power Purchase Agreements may
be acceptable.

Different generators can connect to the grid from different
locations. This puts added stress on transmission planning, as
planners do not know the locations of all the future generation in
advance. This means that more robust transmission is required to
accommodate the many possible generating locations. Of course,
the price of connecting to the grid will have to be included in the
costs of the generator, making poor locations more expensive than
good ones. But on the other hand, there is no certainty that
locations very important to the system will end up with appropriate
generation.

There is also no certainty in the type of generation being added to
the grid. A nuclear plant’s technical operation characteristics differ
from those of a gas-fired plant or a wind farm. The new grid must
take all these things into consideration.

Growth of Wind Generation in North America

Wind generation continues to grow in both Canada and the
United States.

By more than doubling its total installed capacity to 1,460 MW
by year end 2006, Canada became the world’s 12th largest
nation in terms of installed wind energy capacity. Provincial
governments are currently seeking to put in place a minimum of
10,000 MW of installed wind energy capacity.

The US wind energy industry installed 2,454 MW of new
generating capacity in 2006, an investment of approximately $4
billion, making wind one of the largest sources of new power
generation in the country – second only to natural gas – for the
second year in a row. New wind farms boosted cumulative US
installed wind energy capacity by 27% to 11,603 MW.

Source: American and Canadian Wind Energy Associations

Renewables and Distributed Generation

Environmental concerns have led to a very significant political
commitment to new renewable resources. The EP Act of 2005 and
the energy plans of the individual states and Canadian provinces all
provide renewable incentives, either through the tax system (such
as production tax credits) or through renewable obligations and
feed-in tariffs, or both.
Annual Energy Outlook Reference Case forecasts that renewable
generation will increase by 1.5% per year to 2030, or by 45%.
However, it acknowledges that new strategies to address global
warming will likely put pressure on this number to increase. In its
World Energy Outlook, the IEA recommended that the United
States increase its share of renewables to achieve the alternative
policy scenario.

Specifically, the US EP Act promotes renewable energy resources,
including hydropower. It extends through the end of 2007 the tax
credit for wind, closed-loop and open-loop biomass facilities,
geothermal, small irrigation power, landfill gas and trash
combustion facilities. There are increased tax credits for solar
energy, and new tax credits for fuel cells and distributed generation.

The main renewables being implemented are wind, solar, biomass
and hydro. Of these, only large hydro and biomass are the
traditional dispatchable type of generation that can be controlled as
required by the system. Renewables such as wind, solar and, in
some cases, small hydro, are non-dispatchable or “intermittent”
resources. This means that they are not necessarily available when
needed by the system, but rather when the resource is available,
such as when the wind blows or the sun shines. This has profound
effects on the management of the bulk electric transmission
system. It has effects on system stability and changes the
requirements for system reserve allowances and standby capacity.
In most systems, these forms of generation are run as “base load,”
meaning that they are dispatched first, or, in this case, whenever
available. This may displace more economic generation, thus
increasing electricity costs. ISOs are now starting to understand
how to integrate this form of generation into the grid. Many
studies have been done to investigate the impact of this on the
system and to set targets for maximum tolerable amounts of this
type of generation.

In addition to their intermittency, these resources are not
transportable to a specific site, i.e., the generation facility must be
built where the resource is. Once again, this places new challenges
on the system, as often the best wind can be long distances from
the required load. And given that it can come on and off the grid
at somewhat unpredictable rates, this will have an impact on the
system design and management.

The above applies to larger-scale facilities, such as wind farms.
Renewable generation is also more amenable to more local or
distributed generation. For example, individual homes or
businesses can install solar panels or small wind turbines on their
roofs, which would contribute electricity to the grid at some times;
at other times their homes would be required to use grid-based
power. This means new challenges for the distribution systems as
customers can now also be generators.

Smart Meters and Other Technology Advancements

As discussed earlier, there are many changes in the ways that
electricity is being generated, controlled and paid for. All of these
changes require technology to enable them.

In order for prices to be used to influence behavior, it is essential to
monitor electricity usage as a function of time. Only then can
policies be put in place to charge for time of use. The technology
to achieve this objective is known as “smart metering.” This
requires a large-scale change from the current mechanical bulk
meters that are used to measure electricity usage to new electronic
“smart” meters. There is no one specification for these meters. In
general, they can measure, maintain and transmit usage data to the
utility automatically on a frequency of interest to that utility.
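
Since, as noted, there is no single specification for these meters, the record sketched below is hypothetical: it simply illustrates the kind of interval data a smart meter might hold and transmit, and how a head end could aggregate it.

```python
# A hypothetical interval record and a simple head-end aggregation.
from dataclasses import dataclass

@dataclass
class IntervalRead:
    meter_id: str
    interval_start: str    # ISO 8601 timestamp for the start of the interval
    interval_minutes: int  # e.g. 15, 30 or 60, at the utility's chosen frequency
    kwh: float             # energy consumed during the interval

def daily_total(reads) -> float:
    """Aggregate interval reads into a billing quantity for the day."""
    return sum(r.kwh for r in reads)

day = [IntervalRead("M-001", f"2007-06-01T{h:02d}:00:00", 60, 1.2) for h in range(24)]
print(round(daily_total(day), 1))    # 28.8 kWh for the day
```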

Smart meter implementation is now spreading rapidly. The EP Act of 2005
requires each electric utility to provide each of its customer classes,
and individual customers at their own request, a time-based rate
schedule. In addition, the Act goes on to specify that each electric
utility shall provide each customer requesting a time-based rate
with a time-based meter capable of supporting that rate
structure. As a result, most jurisdictions within the United States are
either implementing or have plans to implement smart meters. In
Canada, Ontario has mandated that all consumer bulk meters be
changed to smart meters by 2010 (4.3 million meters), with the first
800,000 meters to be installed before the end of 2007.

Smart meters themselves do not change behavior. Achieving the
desired result is a function of the program design. The cost of
implementing the new meters is significant, and the demonstrated
net cost savings to the consumer must be real and measurable
within a reasonable time frame. There are as many programs as
there are utilities, and a large number of papers describing how –
and how not – to go about offering time-based rates. The
extent to which prices must vary from time to time to encourage
behavioral change remains unclear. While there is considerable
hope for this program, at this stage of its implementation, the level
of success remains uncertain.

The grid is a complex, interconnected system that has historically
carried a one-way flow, from generation to final users. This is no
longer the case: as small generators are added at customer locations,
customers can be users at some times of the day and producers at
others. This happens when customers install their own small
generation so they can either send electricity to the grid or accept
electricity from it, depending upon circumstances at the time.
Smart meters are also required to enable this “net metering.”
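
The arithmetic behind net metering is simple in the basic case. Below is a
minimal sketch, under the simplifying assumption that exported energy is
credited at the full retail rate; actual net-metering rules vary by jurisdiction.

    # Minimal net-metering sketch. Assumes exports are credited at the retail
    # rate, a simplification; real net-metering rules vary by jurisdiction.
    RATE = 0.10  # $/kWh, illustrative

    def net_metered_charge(imported_kwh: float, exported_kwh: float) -> float:
        """Charge (or credit, if negative) for one billing period."""
        return (imported_kwh - exported_kwh) * RATE

    print(net_metered_charge(600, 150))  # 45.0: the customer pays for 450 kWh net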

Other technologies are now being implemented for the purpose of
demand response. For example, utilities are installing remote
controlling technology so that the utility can control equipment and
take it out of service during times of peak demand if supply is at
risk.

A typical demand-response system would have a peak-saver switch
installed on a central air conditioner. During critical times (typically
hot summer days), a signal can be sent to cycle the system down
to reduce the amount of electricity it uses, usually with no
noticeable change in temperature. Typical activation would occur
when the electricity supply was reaching its peak, usually on hot
summer weekdays between 2 p.m. and 6 p.m., and the activation
period would not exceed four hours. Yet the benefit to the system
is dramatic: since air conditioning loads are the largest contributor
to the summer peak, widespread use of this technology would
reduce congestion and the need for additional generation and
transmission.
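
To make those activation rules concrete, the sketch below shows the kind of
test a control center might apply before signaling the peak-saver switches.
The 2 p.m. to 6 p.m. summer-weekday window and the four-hour cap come from the
program described above; the demand threshold, function name and everything
else are assumptions.

    # Hypothetical activation logic for a peak-saver program. The time window
    # and four-hour cap follow the program described above; the threshold and
    # names are illustrative assumptions.
    from datetime import datetime

    PEAK_THRESHOLD_MW = 25_000  # assumed system peak-alert level

    def should_activate(now: datetime, system_demand_mw: float,
                        hours_active_today: float) -> bool:
        in_window = now.weekday() < 5 and 14 <= now.hour < 18  # weekday 2-6 p.m.
        summer = now.month in (6, 7, 8)
        under_cap = hours_active_today < 4.0  # activation must not exceed four hours
        return in_window and summer and under_cap and system_demand_mw >= PEAK_THRESHOLD_MW

    # A hot July weekday afternoon with demand above the alert level:
    print(should_activate(datetime(2007, 7, 17, 15, 30), 26_100, 1.0))  # True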

As the grid evolves into the grid of the future, two-way
communication will be required to remotely control loads to help
manage the system. High-quality up-to-the-minute information will
assist customers, generators and utilities in taking the necessary
actions to keep the system running smoothly.

There are also several new technologies on the horizon for future
implementation in the transmission network. The US EP Act of
2005 provides incentives for a large range of technologies: from
high temperature lines to underground cables, from new
transmission component materials to wireless power transmission
and new electricity storage technologies.

Workforce Issues

Underinvestment in the transmission and distribution system over
the last 20 years has brought with it an equivalent underinvestment
in developing and maintaining the workforce.

The workforce is aging and retirements are looming. A 2004 study
by the Canadian Electricity Association4 noted that workers
between the ages of 40 and 54 make up nearly two-thirds of the
total workforce. For trade-related occupations, over one-quarter of
employees were 50 years of age or older and, of more concern,
only about 7% of the workers were less than 30 years old. The
transmission sector had the most employees eligible to retire, with
almost 30% eligible within five years and 50% within the next ten
years; that five-year figure is roughly two-thirds higher than the
average for the electricity sector as a whole.

Even a stagnant sector would struggle to absorb these retirements.
But as has already been seen, investment in new infrastructure is
expected to grow significantly over the next 25 years, meaning that
there is a looming shortfall of workers on the horizon unless
immediate action is taken.

Retirement is seen as the number-one workforce issue by electricity
sector companies. Retirement has implications much broader than
simply a worker shortage. As workers retire, they take experience
and knowledge with them that, without planning and training, can
be lost to the utility.

Previous sections of this report have discussed the transformation in
technologies in the electricity sector. Application of these new
technologies will put added pressure on the field workforce. New
skills will be required to meet the needs of the technology. Working
on renewable technologies, such as wind or solar, requires new and
more multidisciplinary capabilities to keep them running reliably.

More complex, interactive systems will demand deeper worker
knowledge. Training will have to be increased, and workers with
new and different skills will have to be added to the mix.

Implications for the Workforce

The implications for the workforce are clear. New mandatory
reliability standards will impose more rigorous requirements on field
maintenance and on the time allowed to restore faulted equipment
to service. This, coupled with a shrinking workforce and added
technical requirements, means that there will be a need to ensure
that the right workers are at the right place to do the right work in
the shortest period of time. Powerful workforce management will
therefore be a necessity to keep the system up and running
reliably.

Appendix: Glossary of Terms

The following are terms used throughout this document.

Annual Energy Outlook (AEO) – Report prepared by the US
Department of Energy’s Energy Information Administration on an
annual basis to project the trends in energy in the United States.
The AEO 2007 predicts trends to 2030.

BANANA (Build Absolutely Nothing Anywhere Near Anyone) – Term used to define opposition to projects.

Demand Management – Program to reduce electricity usage
through conservation, improved efficiency or other means.

Demand Response – Program to reduce demand during peak
periods through a signal by the system to the customer. Reduction
can be automated or manual.

Energy Policy Act of 2005 (EP Act 2005) – US legislation
passed in 2005 defining current United States energy policy.

Electric Reliability Organization (ERO) – Organization to
manage reliability by proposing standards, which are to be
approved by FERC, and then to enforce these standards and levy
fines for non-compliance, subject to FERC approval.

Energy Information Administration (EIA) – Agency within
the US Department of Energy (DOE) that is responsible for producing
the official energy statistics of the US government.

Federal Energy Regulatory Commission (FERC) – The agency
that regulates and oversees energy industries in the economic,
environmental, and safety interests of the American public.

Grid – The electricity transmission and distribution delivery system.

Independent System Operator (ISO) – The independent entity that
operates the transmission system and administers the market in
deregulated markets.

International Energy Agency (IEA) – This agency acts as energy
policy advisor to 26 Member countries in their effort to ensure
reliable, affordable and clean energy for their citizens. The IEA
conducts a broad program of energy research, data compilation,
publications and public dissemination of the latest energy policy
analysis and recommendations on good practices.

Intergovernmental Panel on Climate Change (IPCC) –
Recognizing the problem of potential global climate change, the
United Nations established the Intergovernmental Panel on Climate
Change to assess, on a comprehensive, objective, open and
transparent basis, the scientific, technical and socio-economic
information relevant to understanding the scientific basis of the risk
of human-induced climate change, its potential impacts and
options for adaptation and mitigation.

North American Electric Reliability Corporation (NERC) – NERC’s
mission is to improve the reliability and security of the bulk power
system in North America. To achieve that, NERC develops and
enforces reliability standards; monitors the bulk power system;
assesses future adequacy; audits owners, operators, and users for
preparedness; and educates and trains industry personnel.

National Energy Board (NEB) – This is an independent Canadian
federal agency that regulates several aspects of Canada’s energy
industry. Its purpose is to promote safety and security,
environmental protection, and efficient energy infrastructure and
markets in the Canadian public interest within the mandate set by
Parliament in the regulation of pipelines, energy development and
trade.

NIMBY (Not in My Back Yard) – A term used for public opposition
to projects within or close to the opponents’ community.

Regional Transmission Organization (RTO) – Regional Transmission
Organizations were created in the United States as the mechanism
by which to achieve wholesale access to transmission in
deregulated markets.

Siting – Term used for selecting routing for new transmission and
distribution and then securing the required approvals for building in
that location.

Smart Meters – Smart meters can measure, maintain and transmit
electricity usage data to a utility automatically on a frequency of
interest to that utility.

World Energy Outlook (WEO) – The WEO is an analysis prepared
by the IEA of the impact of current energy policies, projecting a
vision of how energy markets are likely to evolve. An alternative
scenario analyzes the potential impact of a number of additional
measures to impact energy security and climate change and their
costs.

The Water Industry in Britain and Europe: Issues for 2006 and Beyond

Executive Summary

Europe’s water industry is grappling with six major issues in 2006
that present often-conflicting interests and that may need new
and different approaches to running a water business in order to
meet regulatory and stakeholder expectations.

  • The Environment: U.K. and European legislation governing
    the quality of drinking water, rivers and bathing waters, and
    effluent discharges is becoming increasingly stringent and is
    the major driver of water industry investment across Europe.
  • Resources: Population movements, demographic changes
    and greater use of water-consuming household goods,
    coupled with higher standards of living, mean that water use
    is increasing while water resources, not only in drier southern
    Europe, but in northern Europe and in Britain, are coming
    under increasing pressure. With environmental issues around
    the building of new reservoirs and the over abstraction of
    water from rivers, the water industry has to look to demand
    management, conservation and leakage initiatives.
  • Customers: Customer expectations of their water supplier
    have risen, particularly where privatization and competition
    have affected and upgraded other utility provision such as
    electricity, gas and telecoms.
  • Investment and finance: Increasingly the public sector in
    Europe has been unable to afford to finance a water sector
    facing increased environmental and customer obligations,
    and the private sector has been playing a greater role. Private
    water companies have operated water franchises in France
    for many years, while privatized water companies in England
    have been a success story in terms of the amount of
    investment they have delivered since 1989.
  • Asset management: Increased environmental obligations
    and customer expectations have led to an increase in the
    sheer amount of water and wastewater infrastructure in
    Europe – the bathing water, urban wastewater and
    framework directives have necessitated the building of many
    sewage treatment plants across Europe, for example. This,
    and the need to look after clean water mains in order to
    address leakage in times of water scarcity, means that asset
    management, always a mainstay of water industry policy,
    has become even more important.
  • Structure: The water industry in Europe has traditionally
    been municipality based, but as new obligations have
    necessitated greater private sector financing, so the industry
    has been forced to look at new ways to organize itself, be
    that through privatization, franchises, public–private
    partnerships, consolidation or outsourcing.

Introduction

The water industry in Europe is a sometimes-contradictory mix of
public sector, municipal control and private sector finance and
investment. The prevailing model is one of municipal control with
increasing private sector franchise operation, a model pioneered
in France but now expanding across the European Union. The
exception is Britain, where regional water authorities based on
river basins were privatized in 1989 and where today private
sector water companies operate.

Whatever the ownership and structural model, a number of
policy and operational issues drive the European water sector.
These are the environmental agenda and resources, especially the
European Water Framework Directive, customer obligations,
investment and finance, asset management, and structure.

The Environmental Agenda

Water has to be taken from the ground or from rivers and after
use be treated and returned to rivers or to the sea. It is therefore
subject to a raft of environmental legislation that exceeds anything
other utilities, such as energy, face.

In Europe, a number of European Commission directives provide
legally binding parameters within which water undertakers must
operate. These are primarily the European Drinking Water
Directive, which controls the quality of water from European
customers’ taps; the Urban Waste Water Directive, which
controls the quality of effluent from sewage treatment works;
the Bathing Water Directive, which controls the quality of
effluent discharged at coastal sites; and the Water Framework
Directive, the newest and biggest piece of legislation, which
brings all these under a strategic umbrella.

Member states that fail to meet the standards set down in these
directives can be, and are, prosecuted and fined.

The Drinking Water Directive
The next focus of environmental legislation is the quality of potable
water. The Drinking Water Directive is the strictest legislation of its
kind in the world in terms of the parameters it sets restricting the
substances permitted in the water consumed across Europe. The
parameters cover a range of around fifty substances, from lead to
potassium and from iodine to chlorine, but also cover aesthetic
issues such as color and odor.

Water companies and authorities can and do devote much of
their investment and technology toward ensuring that their
water treatment works produce drinking water that complies
with the Drinking Water Directive – and are prosecuted if they do
not. This is despite the fact that only about 3 percent of the treated
water that issues from customers’ taps is actually drunk.

Cryptosporidium
One specific challenge the water industry faces on the drinking
water front is cryptosporidium, a parasite originating from farm
animals that finds its way into aquifers, is extremely difficult to
detect and eliminate from water, and causes stomach upsets.

Much investment has been made in filtration and monitoring at
water treatment works to deal with this issue.

Diffuse pollution
A wider issue of diffuse pollution of water aquifers from
industrial, urban runoff and agricultural sources is an increasing
challenge for the water industry across Europe.

While it has been possible to control more obvious pollution of
the water environment using the so-called end-of-pipe solution,
diffuse contamination of aquifers and rivers from, for example,
agricultural pesticides, has been less easy to manage.

Diffuse pollution is usually something water companies have no
control over, yet they are obliged to remove its impact from the
water they treat. The Water Framework Directive is partly
designed to embrace this issue and provide a holistic policy for
the whole water environment to ensure that all stakeholders and
participants, from industry to agriculture to recreational water
users to water companies, meet their obligations toward a
cleaner and uncontaminated water environment.

Dirty water
Potable water, whether drunk, used for cleaning or washing
purposes, flushed down toilets, or used by industry or for
agricultural use, then becomes wastewater. It is collected in
drains and sewers and for the most part arrives at wastewater
treatment plants for treatment before it is returned to the water
environment, either to rivers or to the sea.

The Urban Waste Water Directive regulates the quality of water
discharged to rivers or seas near towns and cities. The Bathing
Water Directive specifically regulates the quality of effluent
discharged anywhere near where people might swim or surf in
coastal areas.

Over the last fifteen years, the parameters laid down in these
directives have again driven water industry investment in
treatment and monitoring. Where once untreated sewage might
have been discharged into the sea via short pipe (outfall), for
example, the strictures laid down in these directives have now in
many cases meant that the water industry across Europe has
increasingly had to install primary, secondary and even tertiary
(often ultraviolet) treatment along with long sea outfalls.

The result of this environmental legislation has been an
unprecedented ramping up of investment in wastewater
treatment and a corresponding improvement in the quality of
Europe’s rivers and coastal waters. Much is still to be done in this
area, especially in the newer EU members in central and Eastern
Europe, where a legacy of industrial and agricultural pollution
and a relative lack of wastewater treatment and infrastructure
has still to be addressed in terms of its effect on the
water environment.

The last part of the water cycle is the disposal of sewage sludge.
While treated wastewater effluent can be safely returned to
rivers or to the sea, semisolid sludge, which has to be disposed
of, remains.

If effectively treated, it can be used as fertilizer or can be reduced
to pellet form for landfill. In some instances, it can be incinerated
and in some instances be used for the generation of energy.

Resources

Resources are the first step on the path of water environmental
legislation. Water companies and authorities are limited in how
much water they can abstract, from surface water in particular,
before they start to damage the water environment. European
directives prohibit water companies from over abstraction, which
in turn provides an incentive, if one were needed, to implement
demand management in times of water stress, such as hot
summers and peak demand.

Clean water
At the start of the water industry cycle, water has to be taken, or
abstracted, from surface water, rivers and lakes, or groundwater,
underground aquifers. The first issue to address is therefore the
availability and sustainability of water resources.

Here geography and climate are the key issues. In northern
Europe rainfall has always been thought to be relatively plentiful,
while in southern Europe the climate is assumed to be generally
drier. However, the resources issue is not that simple.

There are climatic realities that are not always appreciated.
For instance, London actually has less rainfall than Istanbul.
Northern Europe may be colder than the South, but rainfall is not
that much higher than in many southern European countries –
the pattern is just different. Winter rain in southern Europe is also
more useful than summer rain in the North.

The other major factor dictating the resources issue is where
people live. Major cities, relatively affluent urban areas and
industrial activity put pressure on water resources, whatever the
availability of resources.

The Customer Agenda

Unlike other utilities, water remains a monopoly for all but the
largest industrial customers. Bringing choice and competition, as
has happened with electricity and gas, has not been possible in
water and is unlikely ever to happen.

But this does not mean that customer expectations have not
risen over the past two decades nor that regulatory pressure for
the water industry in Europe to improve the service it provides for
customers has not increased.

The water industry in Europe finds itself increasingly having to
bridge the gap between customer expectation and service
delivery. If environmental legislation means that water quality is
increasingly addressed, there are still issues around enhancing
service at all points of customer contact, including, crucially, the
customers’ bills. Customer confidence is also key to growth.

Service in the water industry is typically driven by regulators.
Service has historically been poor, as has been the case in many
monopoly industries with a history of under-investment, but is
improving, especially in countries with a high level of private
sector involvement, such as Britain and France.

Key service areas are better-quality water, better pipes, consistent
pressure, hours of opening for inquiries, responses to letters and
calls, and improvements to sewers and wastewater treatment.

Traditionally, the relationship between the water industry and the
customer has been characterized by a “suits us” arrogance,
insensitivity and an asset-focused approach.

Although many water customers care little about their water
suppliers, there are still areas where customers have contact with
their water companies and may have cause for dissatisfaction.
These are outbound mailings, product use, roadwork, meter
reading and other operational contacts. In Britain alone this adds
up to 100 million experiences every day.

But what does a negative or positive customer experience matter
to a monopoly? Increasingly, water companies and authorities
have to answer to a number of challenges. These are regulated
targets; differentiation against comparators, which can influence
the ability to invest; the ability to raise prices; and reputation in
the market, which again can influence the ability to invest or
attract good-quality people.

Challenges affecting customer management
The water industry across Europe faces a number of specific
challenges affecting customer management and billing. That is
notwithstanding the fact that in many places customers are not
yet billed specifically for their water, or when they are, they are
not yet billed on a metered basis for the water they use –
although the tide is moving toward water billing based on usage.

The water industry faces challenges in terms of aging technology,
financial pressure, more demanding customers and
shifting regulation.

Aging technology is exemplified by legacy IT, some of it
becoming increasingly unsupported. There is often poor
functionality, leading to manual processes that are often costly
and inefficient. This leads to inflexibility, with a correspondingly
high cost of change, and high maintenance costs associated with
costly mainframe technology.

Financial pressures stem from huge levels of capital investment
that are required, often as a result of the parameters laid down
in European directives – these add up to £16.8 billion over the
next five years in the U.K. alone – and the limits on the price
increases water companies and authorities can demand.

Regulators also rightly demand greater efficiencies from the
water industry. Another financial pressure is customer debt –
water bills are often low on a customer’s list of priorities, and in
most instances, water companies and authorities cannot
disconnect customers who do not pay.

In terms of demanding customers, expectations are being driven
up across Europe by the development of a consumer culture. This
is fueled in markets like Britain where there is competition in
other traditional utility sectors, such as electricity, gas and
telecoms, and where this has driven other utilities to develop
new and better service offerings.

Customers are also getting used to “channel expectations” –
being able to use Internet and self-service payment options,
for example.

Levels and quality of regulation for water vary across Europe,
with Britain, with its early privatization, probably the best
example. Regulators, where they exist, demand cost savings,
making process efficiency key. There is also a focus on relative
performance between different monopoly water businesses and
a move toward more qualitative measures in this respect.

Faced with these challenges, a water utility needs a customer
management and billing approach that contributes to
dramatically reducing the cost to serve, is flexible and able to
adapt to future developments, provides an improved customer
experience, and can be realistically delivered with the least
operational risk.

Across the range of customer service experience, there are plenty
of opportunities to reduce costs and improve customer service.
These are consolidation of billing and payments across multiple
bill platforms, IT infrastructure, credit and collections,
call center operations, meter reading and field force services.

Enhanced water customer service could manifest itself as
increased first-time resolution of questions and complaints,
efficient resolution of these questions and complaints, tailored
billing for industrial and commercial customers, the ability to
study a bill online, and integrated – and accurate – billing.

Investment and Finance

The water industry in much of Europe remains a publicly
financed, municipally administered business. However, the
private sector and private capital are playing an increasing role,
albeit more on the French franchise model than the British fully
privatized model.

Where the water industry is fundamentally in private hands,
whether through full ownership or franchise, it is seen by
investors as a safe sector, with predictable earnings and cash
flow. Water is seen as lower risk than other corporate sectors and
utilities and also lower risk than other regulated businesses,
perhaps with the exception of electricity transmission.

Compared with other sectors, water in Europe is generally cash
flow negative, with ongoing borrowing requirements for the
foreseeable future, and subject to changes in nation state
regulation and environmental standards from European
Commission regulation.

Private sector water businesses therefore have to focus on
delivery, preparation for the next price review and future
borrowing needs. On delivery they have to reach OpEx and
CapEx efficiencies and try to operate as close as they can to the
frontier of regulatory expectation. In readiness for future price
reviews, they must maintain credit quality and improve their
profile in terms of financeability. And they must match future
borrowing needs with access to the financial markets and
credit ratings.

For private sector water businesses in Europe, the challenge is to
ensure adequate financing and incentives for long-term capital
investment. This is not always helped by the regulatory cycle – in
England and Wales there are issues around balancing the five-year
investment cycle arising from the five-year price review with
long-term environmental targets.

The England and Wales price review allows for future legislative
changes and gives guidance regarding European environmental
standards. But in other areas the five-year horizon presents
potential conflicts of interest.

Asset Management

Regulated private sector water utilities in Europe are faced with
the challenge of managing and maintaining their capital assets,
water and wastewater treatment works, water mains and
sewers. This is often undertaken with a number of suppliers and
contractors, under pressure to provide services at the lowest
possible cost.

Regulators set tough targets for savings on capex and opex,
along with an imperative to manage assets in a safe, secure and
sustainable way.

Taking the U.K. as an example, legacy asset and work
management systems have often not been designed to meet the
increasingly complex information requirements of today’s water
industry. Leakage has become an increasingly potent issue, with
water companies under pressure to radically remedy their
leakage problems before being allowed to develop
new resources.

Water companies in the U.K. and across the rest of Europe often
find that their asset information is, where it exists, fragmented,
making optimal asset performance difficult.

An additional pressure for water industry asset managers is the
increased attention being paid to streetworks activity. In some
countries, such as Britain, water companies can now incur
penalties if they spend too long on a streetworks job or dig up
the wrong piece of road because of inaccurate information as to
where an asset was.

Structure

Although there have always been some private sector water
companies in Europe, the majority have been state or
municipality operations. That is changing.

England and Wales underwent privatization of their water
authorities in 1989. Few other European countries have chosen
to go down this route, but many have sought to emulate the
French option, where a municipality retains ownership of its
water assets but franchises out the operation and maintenance
of those assets to a private company on a fifteen-to-twenty-year
concession basis.

The number of private sector players in Europe’s water scene is
fairly small. France has Veolia, Suez and Bouygues. Spain’s
biggest water company is Agbar, partly state owned. Britain has
ten water companies, but the biggest, Thames Water, is
currently owned by German energy giant RWE (though soon to
be sold), and while the next biggest, AWG and Severn Trent,
remain independent, others have different ownership – Wessex
Water is owned by Malaysian conglomerate YTL, for example.

The likelihood is that Europe’s water industry, from an
operational viewpoint, will come to be dominated by French (and
perhaps English-based), Spanish and German (Berlinwasser)
players – but operating on a franchise basis rather than outright
ownership. Whoever buys Thames Water, whether it remains intact,
and whatever strategic direction the new owner takes, the company
will be a crucial part of the future European water jigsaw.

Conclusion

Europe’s water utilities are among the most efficient in the world
and indeed many, from Thames Water to Veolia to Águas de
Portugal, have exported their operations to North and South
America, Africa and elsewhere. But ever-tightening
environmental standards, rising customer expectations,
regulatory pressure, and the rigors of environmental
contamination and shortages of the natural resource itself mean
that Europe’s water industry will have to find innovative ways of
managing, financing and structuring itself – coupled with
innovative use of IT and technology – to overcome the challenges
of the future.

That means working with the private sector. Partnerships with
other organizations that can bring specific skills to bear on
uniquely challenging areas of water industry operations, such as
asset and workforce management, will be important in meeting
the expectations of regulators, shareholders and customers.


ClickSoftware for Water Utilities Solution

ClickSoftware’s packaged offering for water utilities draws on our
depth of experience in serving water utility customers around the
world. The result is an out-of-the-box solution, preconfigured with
industry best practices, designed to minimize the time, cost and
risk associated with optimization technology implementations.
Offering fully automated optimization, ClickSoftware for Water
Utilities helps service managers ensure all scheduling decisions are
consistent with service policy while minimizing the need
for human intervention. For more information e-mail
sales@clicksoftware.com

The Three Pillars of Energy Policy

Successful energy policy is about trade-offs between different and
divergent agendas that must somehow be reconciled. Energy
policy, whether in the U.K., in Europe or globally, fundamentally
rests on three pillars—the customers, requiring safe, affordable
and available energy; the environment; and, in the case of
investor-funded utilities, the investor.

Each pillar is, to some extent, protected by a regulator and
underpinned by a foundation, namely energy policy itself.

How we got here

So how has energy policy come to be defined by customer,
environment and shareholder agendas which are inherently in
conflict? In the U.K. before privatization and before the Kyoto
protocols on climate change and emissions—which committed
Britain and most of the rest of the world to reducing carbon
emissions from, among other things, power stations—there were
more certainties surrounding the energy sector.

There were no shareholders to satisfy, no Ofgem or Environment
Agency, no global warming agenda and plentiful North Sea gas
and a nuclear fleet with years of life left in it. There was no
competition either, so customers, with no choice, consumed what
they were given. Energy companies regulated themselves, within
the restraints of government policy.

Fifteen years after privatization all the certainties have gone. With
privatization has come regulation and competition, along with the
globalization of the energy market and multinational ownership.
Climate change and carbon reduction are cornerstones of U.K.
energy policy and the U.K.’s North Sea gas reserves and the
nuclear power station fleet are nearing the ends of their lives.

The first pillar – the customer

The starting point for the first pillar of energy policy is that,
increasingly, the customer has choice. All over the world, energy
markets are turning to deregulated, liberalized and competitive
models. It is true that this does not prevail everywhere, but in
most Western economies the market and competition are seen as
the most effective mechanisms for an effective energy industry,
driving efficiency and empowering customers.

Even where there is less appetite for a private sector competitive
market politically, privatization is often the only economically
viable way for a country’s energy industry to move forward, with
the public sector no longer able to sustain it.

When a customer has the ability to switch energy suppliers, that
customer is empowered in a way that could not have been
imagined in the days of monopoly. Energy, though, remains a
commodity market in all but name, with little to differentiate
offerings—apart from price. Competitive energy, where it exists
today, is a price-driven market.

But this does not mean the energy customer holds all the power.
Competitive energy retail is at the mercy of wholesale electricity
and gas prices. Because these are in turn affected by the price of
oil, wholesale energy prices are subject to volatility. In the U.K., for
example, we have seen supplier after supplier increase the price of
electricity and gas to the customer.

Oil prices are also on the rise, and with current political and
economic instability not about to go away, wholesale energy
prices are set to stay high for the foreseeable future. This is driven
by long-term demand and geopolitical issues, as well as by new
short-term and localized issues, such as the effect of the recent
U.S. hurricanes on oil supplies.

The Holy Grail of competitive energy retail strategy in terms of
customers is to be able to add value in an otherwise commodity
market. This has been done successfully with dual fuel offerings,
but utilities that tried to bundle a number of utility services
together have now retreated from this position.

Energy companies may now try to add value to energy offerings
through tariff adjustments, billing frequency, energy saving advice
and services, and smart metering. Enhanced service offerings can
also be a way of adding value. Some suppliers have tried to add
value by offering ancillary services in conjunction with scheme
partners such as Homeserve, which offers householders a range
of specialist insurance services.

For the industrial, business and commercial customer the concept
of an “energy service company” (or ESCO) has long been talked
about as an evolution of an electricity and gas supplier into an
organization that is more of an energy management partner.

But above all, what do customers want? They want low prices,
security of supply with zero interruptions, and accurate, excellent
customer service. While low prices and security of supply will
continue to be at issue for all utilities, it seems the future in terms
of building and sustaining customer loyalty may therefore be
doing better at the basics. Highlighting service quality as part of
the brand offering and then exploiting this to retain and attract
customers may be less dramatic than multi-utility services but is
more likely to be understood by customers and appeal to them in
an age of often remote call centers and uncertain appointment
times.

Many utility companies are already shifting their service strategies
toward offering better customer service with the assistance of
service optimization technology. Service optimization technology
helps companies plan for service demand and schedule service
visits so that the utility is maximizing the productivity of its field
workforce. As a result, utilities are able to address customer
service calls more quickly than before and are often able to give
customers shorter appointment windows, freeing customers from
waiting at home all day long for a service engineer to arrive.

Companies such as United Utilities and Anglian Water in the U.K.,
Badenova in Germany, Antwerp Waterworks in Belgium and
Pacific Gas and Electric in the United States have all turned to
service optimization over the last year as a means of improving
productivity in their service organizations.

And the same technology that enables better productivity, and
thus happier customers, has tangible benefits for the utility
company as well. Service optimization technology works by
removing inefficiencies from the process. Thus, it helps companies
do more with the same amount of resources. It is a win-win
situation for both the customer and the utility.

But the question remains whether the customers’ desire for low
prices and high standards of service can be reconciled with
security of supply issues and environmental concerns. So let’s take
a look at the second pillar.

The second pillar—the environment

Energy companies are now at the center of the environmental
debate. Governments have set emissions targets and promoted
renewable energy as part of global and national policy to address
climate change and global warming, to which the power
generation sector makes such a significant contribution.

Trading in emissions between the rich developed nations and the
developing world has of course distorted this issue.

Emissions trading has enabled some economies to continue to
emit carbon, but ultimately there is still an overriding need to
reduce generation capacity that produces carbon—coal and gas
generation. It also means looking at energy saving initiatives, with
which customers will be comfortable, and pricing mechanisms to
manage demand, with which they will be less comfortable.

In recent months the generation debate has begun to swing away
from renewables as the one-stop answer to greener energy. The
problem is that while wind generation is the most widespread
new form of renewable energy, wind turbines work only around
30 per cent of the time, when there is wind. Nuclear
generation, on the other hand, is also zero-emissions and is highly
suitable as baseload generation—in other words, with plants
being operational all the time.

Britain’s prime minister, Tony Blair, announced at this year’s Labour
Party conference that his government would at least consider
building a new generation of nuclear power stations to replace
those built in the 1950s, ‘60s and ‘70s. These are currently due to
be decommissioned by 2014 and produce more than 20 per cent
of the country’s electricity. At least one of these nuclear power
stations, at Dungeness, may now have its life extended by five
years or more.

Of course the argument against new nuclear is not just the
environmentalists’ argument about waste and safety—it is an
economic one too. Nuclear power was once going to be “too
cheap to meter”. In fact, as we have seen with the financial
difficulties of U.K. nuclear power generator British Energy, the
fluctuations of the wholesale electricity market can be crippling
for baseload generators, which do not have the operational
flexibility of, for example, gas generators.

On the whole, the environmental agenda for energy—a shift away
from relatively inexpensive gas and coal generation toward more
expensive and less reliable renewables, with uncertainty over new
nuclear—does not match the customer agenda of cheap electricity
whenever they want it. Moreover, the very fact that price has
been the prime driver in the recruiting and retaining of customers
in the developed economies is at odds with the central
environmental proposition, which is to cut energy consumption.

Populations in emerging economies want higher standards of
living and Western-style consumer goods and demand for
electricity is growing exponentially in such places as China and
India. On the whole, this electricity is coming largely through not
very environmentally friendly coal and gas generation, with some
nuclear.

The huge demand for energy in China is not going to be
addressed by building wind turbines, however environmentally
friendly this may be. New coal-fuelled power stations are opening
there every few weeks, because China has massive reserves of
coal and this represents the cheapest source of energy, however
massive the environmental implications.

Can technology help? In the long term, technology could yet come
to the environment’s rescue. It may be possible to take carbon,
which is emitted through electricity generation, and bury it
underground, perhaps in voids left by natural gas or oil extraction
—this is known as carbon sequestration, or carbon capture. In this
way gas and “clean coal” plants would still be environmentally
acceptable, because carbon emissions would be managed.

Other technological breakthroughs, such as using hydrogen as a
fuel, could help establish a low-carbon economy in the future.
And there are other forms of renewable generation such as solar
and wave power that could ultimately be more reliable than wind.
Micro CHP at a domestic consumer level is another potential
option for future distributed electricity generation. But all these
developments need support from government and from investors.

Regulatory standards will encourage utilities themselves to
become more environmentally efficient. As well as reducing
emissions from power stations, these regulations also translate
into cutting pollution from the utility’s business operations. One
way to effect this change is by better managing the company’s
field labor force. Put simply, if a utility’s field force can make fewer
journeys in the field and make each journey more efficient, the
environment will benefit.

This is another area in which service optimization technology can
help utilities. Using optimization technology, a utility can
consistently schedule field engineers in such a way as to minimize
travel. By scheduling one engineer to several customers in the
same area, utilities can minimize drive time and thus fuel
consumption and exhaust pollution.
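
As a toy illustration of the travel-minimization idea only, the sketch below
orders one engineer's jobs greedily by proximity so the route visits each
cluster of nearby customers in turn rather than zigzagging between areas. Real
service optimization products also weigh skills, time windows and priorities,
and their algorithms are proprietary; this is not any vendor's actual method.

    # Toy nearest-neighbor routing sketch to illustrate travel minimization.
    # Not any vendor's actual optimization algorithm.
    import math

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def nearest_neighbor_route(depot, jobs):
        """Order jobs greedily by proximity, starting from the depot."""
        route, current, remaining = [], depot, list(jobs)
        while remaining:
            nxt = min(remaining, key=lambda j: dist(current, j))
            remaining.remove(nxt)
            route.append(nxt)
            current = nxt
        return route

    depot = (0.0, 0.0)
    jobs = [(5.0, 1.0), (1.0, 1.0), (4.9, 0.8), (0.8, 1.2)]  # two clusters
    print(nearest_neighbor_route(depot, jobs))
    # Visits the nearby cluster first, then the distant one.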

And lower fuel costs in the field will also benefit an energy
company’s bottom line.

In terms of transmission, the establishment of a “greener” field
force will reduce travel through optimized routing, reduce paperwork
through mobile communication, and support fast and
appropriate responses to environmental emergencies.

In terms of retail, facilitating the “educated customer” can
provide the customer with insight into his or her consumption,
using smart-metering capabilities and establishing a meter
replacement plan in an optimal manner. And appointment
booking online can increase the likelihood of a customer being at
home when the engineer arrives—again reducing unnecessary
travel.

The third pillar—the investor

In investor-owned utilities the ultimate aim is to deliver
shareholder value. And this third pillar of energy policy may not
necessarily chime with the customer or environmental agenda.

The investor’s main agenda is return on capital, preferably over
the shorter term. But carbon sequestration or building a new
generation of nuclear power stations is a long-term proposition.

Governments set energy policies, but increasingly it is expected
that the market will deliver them. Investors will choose to invest in
projects only if the returns are at a suitable level. That means—in
the case of a generation plant, for example—that it is worth
investing in a power station only if the wholesale price of electricity
is at a sufficiently high level to make the plant profitable.

Recently in the U.K. energy prices fell, and it was no longer
economic to build new plant. Moreover, old plants were
mothballed, and capacity was reduced, with resulting bankruptcies.

Switching from the energy sector was a good decision for
investors—but the resultant reduction in the generation capacity
margin was not necessarily a good decision for customers.

If the stalling of investment means stalling on building new
environmentally friendly or renewable generation plants, it is not
necessarily good for the environment either.

The investor’s view is inevitably more short term than an energy
industry engineer’s view. And the decision regarding when to
invest rests on when the return on the investment is deemed
appropriate rather than when customers or the environment
require that investment to be made.

It is also worth noting that different private sector investor groups
will tolerate and absorb different risks in the energy delivery chain.
Governments, in wanting to see their energy policies delivered, do
not always understand this.

But a workable energy policy can be reached only by
understanding what is commercially and practically possible.

One thing is certain. The role of the private sector investor in the
energy industry is going to be omnipresent and permanent. Few
governments can sustain state-funded investment in energy even
if they wanted to, although this does not mean that governments
should lose interest in the energy sector. They will
continue to play a key role in shaping energy policy.

The plinth—security post-Kyoto

Customers, the environment and investors all need to have an
energy market that delivers optimal security of supply—within a
low-carbon context. Customers want the lights to stay on, the
environment needs carbon emissions drastically reduced and
investors do not want to see shareholder value destroyed by
blackouts and breaches of environmental targets.

Reconciling security of supply with low carbon is the fundamental
conundrum facing the energy sector in Britain, in Europe and
worldwide. This is particularly acute in the developing world,
where economic drivers for growth take precedence over
environmental factors.

To feed the rapidly expanding growth in energy demand, China
has turned to its plentiful coal reserves to ensure security of supply
and meet the needs of customers. This it can do—albeit at the
cost of the lives of thousands of Chinese coal miners every year.
And the cost to the environment is enormous.

Other countries, notably Denmark, Germany and Spain, have
made significant investment in wind turbine generation. This
clearly is totally in tune with low-carbon aspirations. But with
wind turbines, as we have seen, working only 30 per cent of the
time, the contribution to security of supply can be only peripheral.
Indeed, any wind turbine capacity installed needs to be backed up
by a gas generation plant.

In terms of global security of supply, there is enough gas and coal
and enough potential nuclear generation, backed up by solar
power and wind turbines, to match demand for some time to
come. Unfortunately, take out the carbon-emitting generation
capacity and that picture starts to look different.

Security of supply in the U.K.

How can the three pillars of customer, the environment and the
investor be supported here in Britain? How can the trade-offs be
made between their different demands in a context of security of
supply and the desire to reduce carbon emissions?

In Britain today, 40 per cent of power is generated from gas, 35
per cent from coal, 20 per cent from nuclear, 4.6 per cent from
renewables including hydro and around half a per cent from oil.
As we have seen, the scenario for the next couple of decades or
so has to change.

The government has set an ambitious renewables target of 10
per cent by 2015, with the nuclear share of 20 per cent set to
decline as plants reach the end of their lives around 2012-2016.

Many of the coal plants will have to be extensively and expensively
modified to meet new environmental regulations, while domestic
gas supplies are running out and Britain will become reliant on gas
imported through continental interconnectors or by ship in LNG
(liquefied natural gas) form from farther afield.

A decision on whether or not to embark upon a new generation
of nuclear power stations may be two years away, although the
government has promised an energy review in 2006. Even so, the
six new plants that might be built would realistically replace only
most of the lost 20 per cent (not all—Sizewell B will go on into
the 2020s).

With gas, the declining supplies from the U.K. Continental Shelf
can be sourced elsewhere, with new LNG terminal capacity at
Milford Haven in 2007 and 2008 and at the Isle of Grain in 2008.
More interconnectors will come on stream in 2007 and 2008.

The gas situation could be tight between now and then though.
High gas prices—driven by the strong gas price linkage to oil in
Europe and by the lack of competition, meaning that European
suppliers can link purchase contracts to oil—are an issue here too.

This leaves the coal shortfall. Clean coal and carbon sequestration
are options, but unproven options. Coal could be replaced by gas
plants, but while that would reduce emissions, it would not be
enough to meet Kyoto targets.

Conclusion—supporting the three pillars

So with these seismic shifts in the foundations, how can the three
pillars be kept standing? Reconciling the conflicting needs of
these three constituencies is the fundamental truth of energy
policy.

It must be remembered that technology and the workforce have
a key role to play. They are not the pillars, but they serve the
energy sector as opposed to making demands upon it.

For customers in the U.K., security would come through massive
reliance on imported gas. Prices would be lower if a proper
competitive market could be established across Europe that would
not only give customers more choice on price, but also reduce the
wholesale price of gas because suppliers would no longer link the
price of gas with that of oil.

So customers would probably vote for the dash for gas.

The environment would instinctively be best served by an
explosion in renewables. Yet as renewables would mostly be in
the form of wind, there is an environmental negative in terms of
visual impact, as well as the issue of connectivity, with hundreds
of kilometers of new overhead transmission and distribution lines
bringing electricity from remote offshore wind farms across tracts
of often beautiful countryside to where it would be needed.

The environmental lobby would vote for renewables but might be
swayed by the zero-emission option of nuclear new build.

Investors would be attracted to energy options that showed a
return. Investing in cleaner coal plants might prove attractive.

Nuclear new build is, as yet, economically a somewhat unknown
quantity, with the disadvantage of being a baseload player and
with the added sting in the tail of decommissioning costs.

On the other hand, if gas prices stabilized but did not fall so far
as to bring about a collapse in the electricity wholesale price,
investment in proven CCGT gas generation might be attractive to
investors.

So the energy mix of the future will change. What is important to
the whole edifice is that energy security is hedged by not relying
on one energy source that could be hit by shortages or price
volatility. Nuclear could yet make a comeback—or be consigned
to history. Technology could yet come to the rescue of coal.

Gas could either be stabilized by a properly working competitive
market or be afflicted by high oil-bound prices and/or an overreliance
on imports from unstable places. And renewables will be
in there, but as an important niche, a part of the mix, but only
ever that.

Meanwhile, supporting the three pillars of energy policy as the
industry examines issues such as the energy mix must be a fully
optimized workforce. Customers, the environment and investors
demand an ever more efficient and optimized utility, and while
they are seemingly different and conflicting forces, the three
pillars are more in line in organizations that use service
optimization technology.

Workforce optimization reduces costs, which can be passed on to
customers in terms of power prices; reduces emissions, which
benefits the environment; and pleases the regulator through
better customer service—all of which will be good news for
investors.

In the end, properly functioning markets and new technology
under the guiding hand of government energy policy and an
achievable environmental agenda will light the way forward and
keep the three pillars of energy policy standing and in proper
alignment.

RAMP: A Unique Solution to Recovering Lost Revenue

Revenue losses are a common
problem among utilities across the
globe and are typically segmented
into three categories:

  • Technical – energy/commodity lost during distribution;
  • Administrative – internal operational issues related to
    under-billing; and
  • Commercial – theft and fraud.

Figure 1: Typical Utility Revenue Losses

Across all three loss types, UtiliPoint (a
leader in providing research to the utilities
industry) estimates revenue lost at
between 3 and 7 percent (see Figure 1).
The following review will focus on
administrative and commercial losses,
which make up 2 to 4 percent of total
revenue.

For a typical large North American
utility, commercial and administrative
revenue losses can amount to over $150
million per year.

Most utilities employ small internal revenue
assurance teams that largely focus
on theft and rely on field personnel to
identify and resolve issues. Also, current
revenue assurance processes within utilities
are highly manual and not data-intensive. Employing such an approach leaves
significant money on the table.

Recently, computer applications have
emerged that utilities can purchase to
automate the search for either unpaid
bills or energy theft. This approach,
however, also fails to optimize revenue
recovery, owing to inexperience with
these analytical tools, insufficient
staffing and utilities’ hesitation to
invest in prevention programs.

IBM has developed a Revenue Assurance
Management Program (RAMP) to
address revenue losses in the utilities
industry. The program comprises a
proprietary methodology and set of tools
to identify and recover lost revenue for
gas, electric and water utilities. RAMP has
as its foundation a gain-sharing and governance
model that meets the needs of utilities,
with various implementation options.

A unique characteristic of RAMP is that,
rather than being just an application
that the utility can purchase, the offering
is available as a service to the utility.
RAMP materially improves the identification,
collection and prevention of administrative
and commercial losses across
20 different loss types. The program
includes:

Statistical Analysis of Loss Types

IBM applies sophisticated filters to a
utility’s billing, metering and other transactional
data in order to identify different
loss types including, but not limited to,
theft losses, bills miscalculated due to
company error, meters or regulators
incorrectly sized for the consumption,
and under-registering meters.
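
As a toy example of such a filter (not the actual IBM or Itron analytics, which
draw on far more data), one simple screen flags accounts whose recent usage has
dropped sharply against their own history; the account data, threshold and
function name are illustrative assumptions.

    # Toy revenue-assurance filter: flag accounts whose recent average usage
    # falls far below their own historical average. Illustrative only.
    def flag_drop(history_kwh, recent_kwh, drop=0.40):
        """True if recent average usage is more than `drop` below history."""
        hist_avg = sum(history_kwh) / len(history_kwh)
        recent_avg = sum(recent_kwh) / len(recent_kwh)
        return recent_avg < hist_avg * (1.0 - drop)

    accounts = {
        "A-1001": ([1000, 1100, 1050, 980], [420, 400]),  # suspicious drop
        "A-1002": ([800, 820, 790, 810], [805, 798]),     # normal usage
    }
    leads = [a for a, (hist, recent) in accounts.items() if flag_drop(hist, recent)]
    print(leads)  # ['A-1001']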

Figure 2: RAMP Capabilities and Services

Analytical tools are available through
Itron, an IBM partner, as well as tools
developed by IBM Research (see Figure
2). The tools capture and analyze more
than 125 fields of data, helping to materially
improve the identification of losses.

The analysis tools are complemented
by the experienced IBM-Itron team,
combining decades of industry expertise
with critical analytic skills. The following
capabilities and services are available
with RAMP:

Flexible Identification, Investigation and Recovery Approach
The RAMP approach enables a utility to
collaboratively determine which tasks in
the revenue assurance recovery process
should be completed by which partner.
IBM is flexible in assigning roles and is
prepared to undertake all RAMP tasks.

Enhanced Back-Billing System
Once a revenue assurance lead is confirmed
to be a loss, there is a need to
back-bill the account (subject to any PUC
or utility company guidelines that restrict
the length of back-billing).

Many utilities have a less-than-optimal
method of reconstructing the back-bill,
effectively losing a portion of the potential
revenue recovery. To effectively
back-bill the account, it is necessary to
understand the causative reasons for
the meter under-registration, and then
to use a back-billing approach that recognizes
and responds to those causative
factors.

Typically the causative factors leading
to under-registration fall into one of two
categories: “fixed percentage” of loss and
“variable patterns” of loss. Fixed percentage
losses occur frequently, particularly
for meters that are inaccurate and also
for several theft types. These loss types
exhibit a rather constant, fixed percentage
of loss once the loss starts. As an
example, if tampering disables one phase
of a two-phase meter, and that phase
carries 45 percent of the load (based on
an analysis of the customer site at time
of theft detection), then the re-billing
would need to make up for that missing
45 percent.
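For a fixed-percentage loss, the back-bill is a simple gross-up of the metered amounts. The sketch below is a minimal illustration of that arithmetic, not any vendor’s actual re-billing engine, applied to the two-phase example above.

```python
def fixed_percentage_backbill(metered_kwh: list[float],
                              loss_fraction: float) -> float:
    """Back-billed energy for a fixed-percentage loss.

    If a disabled phase carried loss_fraction of the load (0.45 in
    the two-phase example above), the meter registered only
    (1 - loss_fraction) of true usage in each tampered month.
    """
    registered_share = 1.0 - loss_fraction
    true_kwh = [m / registered_share for m in metered_kwh]
    return sum(t - m for t, m in zip(true_kwh, metered_kwh))

# Example: three tampered months at a 45 percent loss.
# fixed_percentage_backbill([550.0, 605.0, 522.5], 0.45)
# -> 450 + 495 + 427.5 = 1372.5 kWh to be re-billed
```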

A “variable pattern” loss would be
caused by a different kind of theft, e.g.,
a customer who tampers with the meter
one week, but not the next. The key here
is to measure the total load and also look
at periods when no tampering occurred
to determine the appropriate base level
of usage/demand. A seasonally adjusted,
customer-specific base graph is then created
and compared with the tampered
readings to highlight the amount of
under-registration.
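A minimal sketch of the variable-pattern calculation follows; the seasonal factors stand in for the customer-specific base graph described above and are assumed inputs rather than anything the text specifies.

```python
def variable_pattern_backbill(clean_reads: dict[str, float],
                              seasonal_factor: dict[str, float],
                              tampered_reads: dict[str, float]) -> float:
    """Estimate under-registration for a variable-pattern loss.

    clean_reads maps untampered months to metered kWh;
    seasonal_factor maps each month to a seasonal weight
    (1.0 = an average month); tampered_reads maps tampered
    months to metered kWh.
    """
    # De-seasonalize the clean months to get a base usage level.
    base = sum(kwh / seasonal_factor[m] for m, kwh in clean_reads.items())
    base /= len(clean_reads)

    # Expected usage in a tampered month is base * seasonal factor;
    # the shortfall versus the metered read is the under-registration.
    shortfall = 0.0
    for month, metered in tampered_reads.items():
        expected = base * seasonal_factor[month]
        shortfall += max(0.0, expected - metered)
    return shortfall
```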

Figure 3: Actual Bills of a Specific Customer vs. Its Three-Year Model
Figure 4: RAMP Solution Yields Leads

Once the theft type (fixed versus
variable) is known, then a re-billing tool
can recalculate the missing amounts.
This, however, is only one aspect of the
re-billing engine. The re-billing tool must
also recognize other factors, like days
in the month for variable loss types and
whether there was an estimated reading
(estimated readings can distort re-billing,
since an under-billing in the estimated
month can artificially inflate the usage
in the month where the actual reading is
recorded).

The end result is a professional presentation
that helps the utility improve
back-billing accuracy (start date and
amount of recovery), facilitates customer
discussions and supports the utility in
back-billing challenges before the PUC or
other public organizations, e.g., when
prosecuting theft (see Figure 3).

Investments to Enhance Utility Loss Prevention
The RAMP review will identify the root
causes of several loss types and help
quantify the loss impact on the utility.
IBM, after review with the utility, is prepared to
undertake loss prevention projects that
require IBM’s investment when there is
an appropriate anticipated return on
investment.

In the end, RAMP not only identifies
leads for the utility, but it maximizes revenue
recovery via improved billing and
loss prevention.

RAMP is supported by a proprietary
database to enable detailed loss recovery
analysis, combined with a work flow management
system that plans, controls
and supports daily revenue assurance
investigations.

Gain Sharing Provides Unique Advantages to Utilities

An additional, key aspect of the RAMP
solution is the opportunity to provide utilities
with revenue upside through a gainshare
arrangement (see Figure 4). This
arrangement is unique to the industry
and provides utilities in a capital-intensive
business with the opportunity
to recover losses with minimal to no
investment and with reduced risk. The percentage
of gain sharing that goes to the
utility is specific to each contract. However,
the gain-sharing formula is based
on a “due diligence” review of potential
benefits and costs, and it is developed so
that the arrangement provides the utility
with significant revenue recovery over a
potential multi-year contract.

What Utilities Can Gain From Customer Profitability Analysis

Introduction

Recent moves toward a more competitive
retail environment, as well
as increased performance expectations
from shareholders, have focused
utilities on improving the customer experience
and, to some extent, on improving
customer profitability. However, many
utilities continue to have only a very limited
understanding of the unique financial
costs and benefits associated with each
customer. While regulatory requirements
and the societal obligation to provide
basic utility services will certainly continue,
even for unprofitable customers, we
believe utilities need to better understand
profitability at the customer level in order
to improve the customer experience and
contribute to the bottom line.

This article begins with a discussion
of the benefits of utility customer profitability
analysis, then outlines a unique
approach to modeling profitability for
individual customers and customer segments
and provides an example of some
successful profitability analysis results.
All utilities should consider performing
this type of analysis to support their
efforts to improve the customer experience,
increase revenues, reduce costs and
achieve profitable growth.

Value of Customer Profitability Analysis

Customer profitability analysis is most
effective when combined with both demographic
and customer satisfaction data.
The result is a powerful tool that utilities
can use to create new value in four primary
ways.

Know What Is Driving Profit and Loss
This seems very basic, but utilities often
don’t understand what drives their profitability.
As a result, they have difficulty
answering some important questions, such
as: Are sales to residential customers during
weekday peak demand periods resulting
in significant losses? How do repeat
calls from certain customers to address
basic billing questions affect the profitability
of those customers? Should programs
to drive customers toward online billing
programs be expanded, or do those programs
result in a financial loss? Customer
profitability analysis can help answer
these and other important questions.

Earn the Allowed Rate of Return
A surprising number of utilities earn less
than their allowed rate of return for at
least a portion of their customer base.
The reason is that most utilities in this
situation look at their customers as one
uniform group, even though the failure to
achieve the allowed return may, in fact, be
due to low returns for certain customer
groups, geographic areas and/or products
and services. Customer profitability analysis
can reveal those segments with the
lowest returns so that the utility can take
a focused approach to addressing the root
causes and increase the likelihood of earning
the allowed rate of return.

Target the Marketing of New Products and Services
Most businesses selling goods and services
to the mass market perform extensive
market analysis to identify the targets
with the most potential. In contrast,
utilities whose offerings go beyond the
traditional electric and gas service often
possess limited information on which
they must base critical decisions. By
combining customer profitability analysis
with demographic data, utilities can
identify the most promising prospective
customers and more efficiently market
new products and services, resulting in
profitable revenue growth.

Focus Process Improvement and Cost Reduction Efforts
Utilities often use benchmarking to identify
functions with inefficient processes
and excessive costs. Unfortunately, many
utilities have discovered an inherent
weakness in benchmark data: It can mask
financial losses driven by the unique
characteristics of each utility’s rate
structures, cost profiles and customer
behaviors. Customer profitability analysis
accounts for these profit drivers to better
pinpoint areas of inefficiency and opportunity.
Traditional benchmarking can also
miss some of the most costly inefficiencies
– those resulting in dissatisfaction
among the utility’s most profitable customers.
When combined with customer
satisfaction surveys, customer profitability
analysis can show which processes
are most troublesome to the customers
who both drive profits today and are most
likely to buy new products in the future.

Taken together, these applications of
customer profitability analysis can provide
substantial value through reductions in
costs, increases in revenue and improvements
in customer satisfaction.

Profitability Analysis Approach

While there are several ways of talking
about customer profitability, we define it
in terms of Gross Margin, as follows:

Customer Gross Margin = Net Revenue
– Cost of Goods Sold – Transaction Costs

In this formula, Customer Gross Margin
is the estimated gross profit for each
customer. Note that this approach does
not take operating costs into account.
This is because a fundamental goal of the
analysis is to understand how customer
behaviors drive profitability and how the
utility can change the way it does business
to enhance customer profitability.
These customer behaviors and utility
actions will not typically drive operating
costs, but operating costs could certainly
be factored into the calculation to measure
net profit.
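In code, the calculation itself is trivial; the real work lies in assembling the inputs. A minimal sketch, with purely illustrative figures:

```python
def customer_gross_margin(net_revenue: float,
                          cost_of_goods_sold: float,
                          transaction_costs: float) -> float:
    """Customer Gross Margin = Net Revenue - COGS - Transaction Costs.

    Operating costs are deliberately excluded, per the definition
    above; add them here if a net-profit view is wanted instead.
    """
    return net_revenue - cost_of_goods_sold - transaction_costs

# Example (illustrative numbers): a customer paying $120/month whose
# energy cost $85 and whose metering, billing and call-center
# activity cost $12 contributes a $23 gross margin.
# customer_gross_margin(120.0, 85.0, 12.0)  # -> 23.0
```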

Data flow for a customer profitability model leads to in-depth analysis.

Net Revenue represents the amount
paid monthly by the customer for utility
services, as derived from actual billing
data, less any refunds or other items that
should be netted, and excluding taxes and
fees that are passed directly through to
the customer. Cost of Goods Sold measures
the cost of the electricity and/or
gas provided to the customer and can be
estimated at the individual customer level
from a combination of customer usage
data and electric and gas market prices.
Transaction Costs include meter-reading
costs, billing costs, demand-management
program payments and related costs,
and call center costs, all of which can be
allocated to each customer based on that
customer’s individual activities and attributes.

One key challenge in calculating a customer’s
gross profit is matching energy
usage with market prices to estimate the
cost of goods sold. Ideally, hourly meter
data for each customer’s usage would be
available, but such data remains quite
limited in practice. Instead, a reasonable
estimate of energy usage can be
developed by analyzing representative
load curves by day of week and time of
day for various customer types and then
allocating total monthly usage data from
customer billing records into more discrete
time periods. If the representative
load curves are available on an hourly
basis, they can be used to estimate hourly
usage and then hourly prices for energy
can be applied to estimate cost of goods
sold. Alternatively, the day can be divided
into periods of, for example, six hours, and
then the average energy price for each
period can be applied to estimate the cost
of goods sold.
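The six-hour-period variant of this estimate reduces to a weighted sum, as in the sketch below; the load-curve weights and period prices are illustrative assumptions, not data from any particular utility.

```python
def estimate_cogs(monthly_kwh: float,
                  period_weights: list[float],
                  period_prices: list[float]) -> float:
    """Estimate cost of goods sold by allocating a monthly usage
    total across time periods via a representative load curve.

    period_weights is the share of usage in each period (summing
    to 1.0), taken from the load curve for this customer type;
    period_prices is the average $/kWh for each period.
    """
    assert abs(sum(period_weights) - 1.0) < 1e-6
    return sum(monthly_kwh * w * p
               for w, p in zip(period_weights, period_prices))

# Four six-hour periods: overnight, morning, afternoon peak, evening.
# estimate_cogs(1000.0,
#               [0.15, 0.25, 0.35, 0.25],
#               [0.04, 0.07, 0.12, 0.08])
# -> 1000 * 0.0855 = $85.50 for the month
```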

The diagram in Figure 1 shows a high-level
view of the typical data flow for a
customer profitability model. Customer
data is combined with unit cost data in
the analytics engine to calculate gross
margin, and then data on customer demographics
and satisfaction are merged to
populate the database storing the analysis
results and key supporting data. The
results and supporting data can then be
extracted for reporting and further analysis
outside of the model.
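A minimal sketch of that data flow, assuming hypothetical file and column names, might look like this in pandas:

```python
# Illustrative data flow per Figure 1; sources and columns are
# assumptions, not a prescribed schema.
import pandas as pd

billing = pd.read_csv("billing.csv")            # account_id, net_revenue
costs = pd.read_csv("unit_costs.csv")           # account_id, cogs, txn_costs
demog = pd.read_csv("demographics.csv")         # account_id, segment, ...
satisfaction = pd.read_csv("satisfaction.csv")  # account_id, csat_score

# Analytics step: compute gross margin per customer.
model = billing.merge(costs, on="account_id")
model["gross_margin"] = (model["net_revenue"]
                         - model["cogs"] - model["txn_costs"])

# Merge in demographics and satisfaction to populate the results store,
# then extract for reporting and further analysis outside the model.
results = model.merge(demog, on="account_id").merge(
    satisfaction, on="account_id")
results.to_csv("profitability_results.csv", index=False)
```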

Typical Profitability Analysis Results

Customer profitability analysis results can be put into reports for further study.

Once the customer profitability model is
developed and the data is merged and
analyzed, the results can be put into
reports to support further analysis. The
most basic report provides information
on the distribution of profitability for all
customers, as shown in Figure 2.

In Figure 2, derived from the actual
results of one electric and gas utility,
roughly 15 percent of customers prove
unprofitable, with the loss reaching up
to about $100 per month for some. This
is not too surprising, given that cross-customer subsidization is well-recognized.
The more interesting and actionable
results are revealed when the unprofitable
and the most profitable customers are
analyzed in detail.

Consider the diagram in Figure 3, which
compares the relationship between the
cost of goods sold and net revenue for
customers with the highest gross margin
versus those generating a loss. The result,
again based on a real electric and gas utility’s
data, demonstrates that transaction
costs are not a substantial factor in driving
profitability for these particular customers.
Instead, for the loss customers,
the high cost of goods sold is the primary
driver of negative gross margin. This suggests
that these customers may be particularly
heavy users of high-cost energy
during peak demand periods. Based on
this analysis, the utility should consider
targeting these customers for some kind
of demand-management program. In fact,
the utility can use the profitability model
to establish the level of demand-management
payments to these customers that
would result in an increase in profits.
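As a rough illustration of that last point (not the model’s actual logic), the break-even payment is bounded by the cost-of-goods-sold saving from shifting usage off-peak:

```python
def max_profitable_dm_payment(peak_kwh_shifted: float,
                              peak_price: float,
                              offpeak_price: float) -> float:
    """Break-even demand-management payment: shifting
    peak_kwh_shifted kWh from peak to off-peak saves
    (peak_price - offpeak_price) $/kWh in cost of goods sold,
    so any payment below that saving increases gross margin."""
    return peak_kwh_shifted * (peak_price - offpeak_price)

# Example with illustrative prices: shifting 200 kWh/month of usage
# from $0.12/kWh peak energy to $0.05/kWh off-peak energy supports
# a payment of up to $14/month before the margin gain is consumed.
# max_profitable_dm_payment(200.0, 0.12, 0.05)  # -> 14.0
```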

This gas and electric utility discovered that the high cost of goods sold is the
primary driver of negative gross margin.

In contrast, the high-margin customers
use a greater portion of their energy
during non-peak periods. This result
implies that demand-management payments
to these customers may actually
reduce profits. More detailed analysis of
the usage patterns of these customers is
required, but the customer profitability
model has already uncovered potentially
valuable insights into profit and loss drivers
and pointed to opportunities
for improvement.

Conclusion

Customer profitability modeling can provide
valuable insights into the factors that
impact profits and losses within a utility’s
customer base. These insights can, in
turn, drive specific actions to increase
profits by:

  • Changing customer behaviors to
    enhance profits through program and
    policy changes;
  • Focusing process improvements and
    technology investments where they will
    have the most bottom-line benefits; and
  • Targeting new products and services
    to those customers most likely to buy
    them under profitable terms.

Because most utilities already have much
of the required data, the costs of profitability
model development and analysis
are moderate. Therefore, we believe this
analysis easily pays for itself, and should
be considered a fundamental capability
for utility management and planning.