The Benefits of Server Consolidation for Utilities Infrastructure

Although the notion of consolidation is not new, advances in open system
functionality on the part of hardware vendors over the past few years
have paved the way for increasing opportunities to consolidate. The linked
goals of reducing IT complexity, lowering total cost of ownership (TCO),
and providing a better way to access and manage an organization’s information
and resources are inherent to server consolidation initiatives – and are
now more attainable than ever.

For many organizations, consolidation is not just a tactical initiative
but also part of a larger, strategic data center direction. Companies are
using consolidation not only to control costs and complexity but also in
combination with other infrastructure initiatives, moving toward a more
flexible and responsive overall architecture that can meet the challenges
of today’s demanding business environment.

The Road To Server Consolidation

There are several different ways to consolidate, from simply bringing
existing systems under common management practice to consolidating many
heterogeneous applications and services on one or a few systems. But all
server consolidation initiatives involve several steps, from high-level
consideration of business goals to tactical planning and timelines. Since
consolidation can be quite complex, many companies start with a pilot
project and then leverage their experience into larger efforts. Some IT
organizations have the resources to perform consolidations in-house, but
most depend on outside service firms for at least portions of the project.

TCO and More

For many companies, the overriding benefit of a server consolidation
project is more-efficient use of resources and lower TCO. New server technologies,
such as domains and dynamic reconfiguration, enable IT managers to allocate
resources dynamically and manage service levels, thereby enabling better
utilization and control of system resources.

By integrating a server consolidation initiative with flexible systems,
IT management can not only reduce the number of hardware and software
platforms it has to deal with but can also apply more standardized procedures
and disciplines to a streamlined and often recentralized environment.
Simplified management leads to improved service levels and uptime for
the organization, which results in more efficient resource utilization.

Although underutilization is common in distributed environments, many
companies also have systems that are taxed to the point that workloads
exceed capacity, leading to degraded response time and even system downtime.
In cases where technicians develop, test, and run production environments
on the same system, these groups can work at cross-purposes, with developers
testing applications by pushing them to the limit while production managers
strive to keep the environment stable.

Server consolidation enables an organization to plan how applications
are combined onto specific systems and to apply consistent management
practices that keep systems and information available. A more effective
computing environment then leads to better skills utilization. With distributed
systems, many users have had to perform double duty as systems administrators
– a job they’re often not prepared to handle effectively and rarely learn
thoroughly. When systems are consolidated and recentralized, an experienced
data center professional can do a much better job of bringing together
multiple disparate platforms and running them as a single seamless environment.

Companies that have consolidated require fewer administrators and managers
and can often re-deploy them from routine maintenance tasks to strategic
development projects. At the same time, a company that is consolidating
its systems has a better chance of justifying the hiring of an expert
staff. A consolidated environment also facilitates proactive, rather than
reactive, management.

Figure 1
Client-server trends

Planning a Strategy

The steps involved in a server consolidation strategy vary from company
to company, but most would agree on the following general outline:

Business Goal Clarification

First, companies need to prioritize their business goals and decide what
they want consolidation to do for them. These goals might include reducing
total cost of ownership, deploying new or modified applications faster
and more effectively, providing accurate and timely information to improve
decision-making, improving user satisfaction, and providing better service
at lower cost, among other objectives that leading organizations strive
to reach.

Once business goals are defined, support for technical goals can be determined.
For example, to reduce TCO, companies may want to reduce storage costs
associated with data replication, purchase fewer large systems, and reduce
third-party software costs by supporting fewer versions.

Asset Identification

With the goals clear, the first tactical step in server consolidation
is to inventory the environment. This step can be difficult and time-consuming,
but to achieve the best-possible return on investment, it is critical
to know exactly what hardware – everything from memory modules to storage
devices to servers – is deployed in each consolidation target area as
well as what operating systems and revision levels, databases, custom
software, and packaged applications are in use.
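To make the inventory concrete, the short sketch below (in Python, with purely hypothetical field names and example entries, not drawn from any particular inventory tool) shows the kind of structured record this step might produce and one simple roll-up a planner could run against it:

    from dataclasses import dataclass, field

    @dataclass
    class ServerAsset:
        """One inventoried server in a consolidation target area (illustrative fields only)."""
        hostname: str
        cpu_count: int
        memory_gb: int
        storage_gb: int
        os_name: str
        os_revision: str
        databases: list = field(default_factory=list)
        applications: list = field(default_factory=list)

    # Hypothetical entries gathered during the inventory pass
    inventory = [
        ServerAsset("erp-01", 8, 32, 500, "Solaris", "8", ["Oracle"], ["ERP suite"]),
        ServerAsset("web-03", 2, 4, 80, "Solaris", "7", [], ["Intranet web server"]),
    ]

    # Simple roll-up: how many distinct OS/revision combinations must the consolidated platform absorb?
    revisions = {(a.os_name, a.os_revision) for a in inventory}
    print(f"{len(inventory)} servers, {len(revisions)} OS/revision combinations to reconcile")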

Capacity Study

With resources inventoried, the time has come to determine the right
types and number of servers for a pilot project. The best targets for
consolidation are systems with stable configurations and well-understood
usage and performance characteristics. It’s also important to understand
the operational requirements of the consolidation targets. For example,
high priority should be given to evaluating issues such as downtime, batch
job windows, disaster recovery scenarios, backups, concurrent users, and
response time for each of the systems that will be consolidated.
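One simple way to reason about this step is sketched below; the utilization figures and the headroom policy are assumptions invented for illustration, and a real study would also weigh batch windows, concurrent users, and whether the candidates' peaks actually coincide:

    # Hypothetical peak CPU demand of each consolidation candidate, in generic "CPU units"
    candidate_peaks = {
        "billing-batch": 6.0,
        "intranet-web": 1.5,
        "dev-test": 3.0,
    }

    target_capacity = 16.0   # CPU units available on the proposed consolidation server
    headroom = 0.30          # keep 30 percent spare for growth and failover (assumed policy)

    combined_peak = sum(candidate_peaks.values())
    usable = target_capacity * (1 - headroom)

    print(f"Combined peak demand: {combined_peak:.1f} of {usable:.1f} usable units")
    print("Fits within headroom" if combined_peak <= usable else "Re-plan: exceeds headroom")

Summing peak demands is deliberately conservative, since the workloads' busy periods may not overlap.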

Proposal and Plan Development

The consolidation proposal and plan document should take the information
gathered from the asset identification and capacity study and recommend
a configuration for the consolidation server. Some proposals, for example,
include a TCO study, developed from a database of acquisition costs, depreciation,
operating costs, and other customer-supplied information that provides
a basis for comparing distinct solutions. In addition, the proposal and
plan should give a detailed timeline for implementation.
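As a hedged illustration of the kind of comparison such a TCO study supports (every figure below is a placeholder supplied only to show the arithmetic, not customer data), a basic model sums acquisition and ongoing costs for each alternative over the study period:

    def total_cost_of_ownership(acquisition, annual_operating, annual_admin, years=3):
        """Very simple TCO: up-front acquisition cost plus operating and administration costs over the period."""
        return acquisition + years * (annual_operating + annual_admin)

    # Placeholder figures comparing the existing distributed estate with one consolidated server
    distributed = total_cost_of_ownership(acquisition=0, annual_operating=400_000, annual_admin=600_000)
    consolidated = total_cost_of_ownership(acquisition=750_000, annual_operating=250_000, annual_admin=300_000)

    print(f"3-year TCO, distributed:  ${distributed:,}")
    print(f"3-year TCO, consolidated: ${consolidated:,}")
    print(f"Projected saving:         ${distributed - consolidated:,}")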

Consolidation

Some companies have the in-house expertise to carry out a server consolidation
pilot project from start to finish. Many, if not most, companies, however,
seek out a consulting firm with expertise in enterprise servers and storage,
system software, networking, and database environments. If porting or
special customization is needed, consultants often enlist the support
of outside technical resources.

In addition, a consulting firm should be able to help design reliability
and performance tests for consolidated systems; recommend or offer software
development and system management tools; keep technical personnel current
on consolidation trends; and keep management apprised of the results that
other, similar consolidation customers are experiencing.

Choosing a Platform

In evaluating hardware partners and platforms for a consolidation project,
it is important to take a look at the kinds of technology they offer,
to see whether those technologies mesh with the company’s vision of what
consolidation should ultimately do. Because consolidation means moving
various applications and services onto fewer machines, the platform being
considered should provide strong solutions in the following areas:

Scalability

To have the flexibility to add users, data, and applications easily,
the consolidation platform should be scalable. IS managers with scalable
platforms are able to add processing power, I/O, memory, and bandwidth
quickly and simply. The server platform should
run a single operating system to reduce application porting problems,
simplify network and system management, and allow the topology to change
quickly as the business changes and grows.

Availability

The server platform should not only provide exceptionally high performance
but also deliver on expectations for uptime and application availability.
With more applications and services residing on fewer systems, it is more
critical than ever that downtime, both planned and unplanned, be kept
to an absolute minimum.

It is necessary to specify systems that have no single point of failure
and to build in redundancy without adding complexity and cost. The
systems should also enable companies to make changes on the fly – to change
system configurations while keeping applications up and running. Consider
systems that utilize domains, to keep problems in one area of the server
from affecting a mission-critical application in another.

Resource Management

The ability to manage resources effectively is the key to getting the
most out of the consolidation effort. The ideal consolidation platform
allows system administrators to create, manage, and bind processes and
applications onto processor sets within a domain or system. A fair-share
scheduler should be included in the operating environment, to ensure that
each application receives the appropriate amount of system resources.
Companies should seek technology that allows borrowing of resources from
elsewhere on the system when needed and enables management from anywhere
on the network.
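To make the fair-share idea concrete, here is a minimal sketch; the share values and workload names are hypothetical, and real platforms implement this inside the operating environment rather than in application code:

    # Hypothetical shares assigned to each consolidated application
    shares = {"erp": 60, "data-warehouse": 30, "dev-test": 10}
    total_cpus = 24  # CPUs in the domain hosting these workloads

    total_shares = sum(shares.values())
    entitlement = {app: total_cpus * s / total_shares for app, s in shares.items()}

    for app, cpus in entitlement.items():
        print(f"{app:>14}: entitled to {cpus:.1f} CPUs when the system is fully busy")

    # A fair-share scheduler enforces these entitlements only under contention;
    # idle capacity can be borrowed by whichever application needs it.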

The ultimate goal of many companies is to consolidate on a large scale,
perhaps across the enterprise. In most cases, this means some consolidation
projects at the workgroup level, some on midrange servers, and some at
the enterprise level. So working with the vendor that has the broadest binary-compatible
product line is essential. The product line must support interoperability
with Windows NT at the low end and provide the highest levels of flexibility
– along with the technical capabilities discussed previously – at the
mid-range and high end. The vendor should also provide a scalable, robust
operating environment that can support consolidation at all levels.

A critical requirement is to evaluate potential consolidation partners
not only by the technologies and products they have to offer but by the
overall commitment they make to customers as well. Does the company offer
an integrated product line, without hardware and software discontinuities?
Is the vendor committed to protecting its customers’ investment by ensuring
binary compatibility now and in the future? Are attractive trade-in programs
available?

The Politics of Server Consolidation

In today’s global business environment, change is inevitable and ubiquitous,
and server consolidation is about fundamentally changing the way IT is
viewed and managed in a business. The technical issues, although complex,
can often be resolved with a minimum amount of disruption in the day-to-day
workings of an organization. However, the political issues frequently
lead to infighting that can delay and sometimes derail the project. In
light of this, high-level management should become involved in and demonstrate
visible support for the consolidation process from the start.

Many companies report that once the consolidation effort is complete
and users across the enterprise start enjoying these benefits, relationships
between IS and business units actually improve. But to get to that happy
point, the consolidation-team champions need to take responsibility for
positioning the benefits of server consolidation for each group. By keeping
everyone – IS, department heads, rank-and-file users – involved in the
process, companies reap the benefits of wide-ranging input and across-the-board
buy-in. That can go a long way toward making any server consolidation
project a successful one.

E-Business in the Power Industry: Transforming the Competitive Landscape

“You have to breathe all day to stay at your desk, don’t you?”

Jack Welch, CEO, General Electric

Global market conditions for utilities are creating entirely new rules
for how companies conduct business and create value. In the past few years,
major shifts such as deregulation, globalization, the integration and
greater efficiency of capital markets, and the dramatic consolidation
and convergence of industries have created a new competitive landscape
for the electric power industry.

Connectivity among market participants – suppliers, customers, and partners
– becomes increasingly important across the entire value chain. The emerging
energy e-marketplace drives lower energy prices; lower cost of operations;
innovative products and services; enhanced business processes and efficiencies;
and increased customer quality and satisfaction.

This white paper surveys the e-business landscape of the power industry
and the many changes that are both taking place and on the horizon. Looking
beyond that horizon, the evolving market is examined to identify what
electric power companies should be doing now if they are to create value
and succeed in the future. “You have to be in e-commerce in every element
of your business, in all of your supply chains, in all of your information
flows, in all of your communications, in all of your customer interactions.
This is not some activity outside the business – this is the business.”

E-Business: A New Stage of Development for the Energy Industry

For the power industry, e-business represents one of the most significant
business imperatives in recent memory. Industry leaders expect this phenomenon
to fundamentally change the ways in which the energy sector conducts commerce.
We should expect the industry to shift from one that makes money on assets
to one that focuses on customers and makes money on information. Wall Street, which has traditionally
overlooked the potential for e-business applications in the power industry,
now places a premium on energy companies that demonstrate e-business strategies
and processes.

Figure 1
How e-business is transforming the power industry

E-Business Demands New Thinking

E-business encompasses more than just selling products and services through
an electronic medium. E-business is about the exploitation of information
networks to gain and systematically leverage competitive advantage.

Revolutionary Impact

E-business forces companies to ask the question, “What business am I
in?”

The advent of Web-based exchanges and private marketplaces for energy,
equipment, information resources, software, and other energy-related
services disrupts the traditional value chain. Within this transformed
environment, significant opportunities for value creation exist in all
e-business spaces, provided companies account for and leverage the ways in which e-business is:

  • Redefining virtually every business process and function
  • Changing conventional concepts and rules about strategic alliances,
    outsourcing, competition, industry specialization, and customer relationships
  • Creating an unprecedented wealth of information about customers
  • Challenging every business to continually reinvent itself

This revolution causes us to rethink the traditional forms of competitive
analysis.

Figure 2
New competitive forces have far-reaching effects on the power industry

The New Competitive Forces

E-business drives new industry dynamics. Increased pressure on prices
and a sharper customer focus accompany the onset of new competitive forces, such as
alliance/partner relationships, existing competition, new offerings and
competitors, connectivity with customers, and connectivity with suppliers.1
Companies need to build specific competencies and overcome numerous challenges
to be successful.

Current business trends show that businesses are re-evaluating their
strategies and organizational structures to maintain a competitive edge
in the new economy.

Connectivity with Customers

The combined forces of increased customer choice and e-business push
power companies to more fully understand and fulfill customer needs. Connectivity
with the customer becomes a critical success factor. E-business provides
a 24×7 window into the customer’s mind, streamlining the customer service
process.

Customers will demand more from their suppliers – the provision of energy
coupled with unique products and services. The Web represents an important
channel for these new revenue sources. Customers will demand customization
and personalization in their Web experience. The Web not only allows for
this, it demands this. Connecting the utility’s processes with those of
the customers creates an additional and compelling reason for customer
loyalty in this increasingly commoditized business.

Connectivity with Suppliers

Connection with key suppliers represents a critical new aspect of the
changing economy. E-markets fundamentally change interactions with suppliers.
Connection through the new electronic marketplace greatly enhances these
relationships in a number of ways. Reduced costs and streamlined processes
are only the first benefits that will be realized. The greatest benefits
will be the new relationships that core suppliers to the industry will
have with the industry as a whole. E-markets will promote and enhance
this concept over a relatively short time.

New Offerings and Competitors

E-business will accelerate the introduction of new products and services
to customers. Through connectivity between customers and suppliers, companies
will be better able to anticipate and rapidly fulfill customers’ needs.
Furthermore, they will be able to shorten the product/service development
cycle through their connectivity with key suppliers.

Alliance/Partner Relationships

The conduct of Web-enabled business disrupts “traditional” competitive
and friendly relationships and also creates entirely new means of conducting
business. Alliances among competitors are forming in many industries to
take advantage of economies of scale and to extract increased efficiencies
throughout the value chain.

Increased cooperation resulting from alliances enables companies to develop
offers that are not only cost-effective but also improved and innovative.
Many companies are engaging in co-branding and joint marketing to make
offers that cater more to their customers’ needs – an action that each
company alone might not have had the expertise to carry out before.

Existing Competition

E-business has, in many instances, put customers in the driver’s seat
with respect to availability, pricing, and quality of products and services.
It has become imperative that companies have strong marketing capabilities
and brand imaging to maintain and enhance competitiveness. The utilities
industry must move quickly to respond to this emerging trend. Otherwise,
companies ranging from energy service providers to consumer product companies
might enter the market to provide the same or better level of service.

Physical restrictions, such as service territories, pose less of a barrier
for new economy companies that want to enter the market. With deregulation,
customers may choose their energy providers, generating opportunities
for companies with sharp marketing skills and innovative bundles of products
and services.

E-Business Evolution

As commerce increasingly shifts to the Internet, power companies must
develop a keen understanding of their relative participation in Web-enabled
commerce. This understanding provides the framework for developing e-business
strategies necessary to deliver success in the new utilities landscape.

Stages of E-Business Evolution

Based on recent experience, there appear to be four stages that define
e-business evolution: Channel Enhancement, Value Chain Integration, Industry
Transformation, and Convergence. Power companies seeking to harness and
leverage the power of e-business must respond to the new competitive forces
along two paths:

  • Enabling and Transforming: The “enabler” path represents incremental
    change to the organization’s existing model and provides the opportunity
    for “quick wins” in the areas of cost reduction and improved business
    processes; the “transformer” path, discussed below, involves more radical
    change to the company and its industry.
  • Channel Enhancement: Within the channel enhancement snapshot,
    companies use e-business technology as an enabler to modify existing
    business processes and, in some cases, to create new ones targeted at
    improving business performance. Within this, companies employ e-business
    technology primarily for information sharing and e-commerce – essentially
    establishing a new channel to market.
  • Value Chain Integration: As the level of Internet competence
    and confidence grows, companies search for the next major step in e-business
    leverage – specifically, using e-business as a vehicle
    for value chain integration.

In a mature state, value chain integration allows companies to share
real-time planning, cost, and production data between Enterprise Resource
Planning (ERP) systems, thereby allowing for creation of a fully-enabled
“entraprise,” a term used to describe an extended enterprise between the
company and its value chain partner(s).

As part of this analysis, some businesses take steps to seize the advantage
afforded by the low cost of moving data, and to revisit the idea of outsourcing
non-core business processes throughout the value chain. Other industry
players are seeking to transform themselves and, in turn, the industry,
through the radical application of e-business strategies, processes and
implementation methods. This path presents the greatest risk and the potential
for the most rewards.

  • Industry Transformation: E-business creates ways for companies
    to maximize shareholder value by completely transforming their industries.
    As the lines between businesses become less pronounced, companies will
    find ways to work together that leverage each other’s core competencies.

The phrase “going to market” will no longer be defined as the way a company
enters the marketplace. Rather, it will characterize the way an integrated
group of companies creates a set of cascading values to transform the
marketplace into a network of value providers.

Companies with a core competence in knowledge management will thrive
in this third snapshot. They will use business partners that have created
best-in-class processes in the physical world and others that build and
run the best value networks to transform the economic base and the operating
mechanics of their industries.

  • Convergence: Convergence is the coming together of companies
    in different industries to provide goods and services to customers.
    It is as much a function of industry deregulation and globalization
    as it is of enabled business models. Convergence is not necessarily
    just an e-business phenomenon. In theory, convergence could occur in
    the complete absence of e-business. However, the continual decline of
    the cost of moving information makes convergence easier and cheaper
    to accomplish.

A supermarket’s offering of retail banking services is one example of
convergence. Others are the emergence of software providers as “infomediaries”
and the coalescing of many services and products in those segments of
the utility industry that are being deregulated. The Internet is fueling
more convergence by providing customers a one-stop shop for all of their
desired products and services. Companies that capture customer loyalty
and that can provide such one-stop shops for a customer market will be
positioned for enormous growth.

Many utility companies are moving from a single company model (e.g.,
an electric utility generates and sells electricity) to a two-company
model (e.g., one entity operates as a generating company and the other
as a sales company). The sales companies compete for customers not only
on the price of the electricity they transmit, but on the other value-added
services they provide (i.e., selling natural gas, connecting customers
to electric (or gas) appliance manufacturers and installers, or aggregating
utility bills into single monthly statements).

Categories of E-Business

There are three core categories of e-business, and each category has
its own set of unique strategic, performance, and technology considerations.

B2B

Analysts predict that 90 percent of e-business will be in this area.
B2B transactions cut capital and process costs, and decrease fulfillment
time. Companies are increasingly examining their core competencies and
are reshaping their own value chains and often the value chain of an entire
industry. Old intermediaries (e.g., wholesalers) disappear and next generations
of intermediaries emerge (e.g., exchanges, auctions, and e-catalogs).

B2C

Many companies have moved beyond the e-tailer phase into the market of
aggregation (e.g., Yahoo!) or Infomediary – the strategic use of customer
information.

B2E

Companies can obtain significant benefits by using intranets for transactions
in functions such as HR and finance. Paperwork decreases, communication
increases, and information can be delivered globally in real time.

Figure 3
The B2B vertical marketplace

Emerging Dominance of B2B as Industry Transformer

The appeal of bringing together huge numbers of buyers, sellers, and
customers and reducing transaction costs for all parties will likely generate
the most activity and transformation in the B2B e-commerce space. This
evolving market is rewarding management teams that understand the new
market dynamics and are designing and building new business approaches
and models.

The B2B transformation is forcing major institutions to move quickly
to:

  • Identify non-core processes, both supply/demand chain and back
    office, that can be outsourced, so the company can focus on “core” financial,
    brand, and human capital.
  • Develop very dynamic outsourcing markets, beginning with parts
    and other supply purchasing. The goal is to tap into the connectivity
    of the Internet to conduct efficient auctions and transactions and dramatically
    improve the performance of supply chain and back office processes.
  • Create entirely new businesses to manage these new networks
    and disintermediate traditional businesses so that customers have increased
    access to the supply chain and can gain even more rapid response to
    their demands.

What Should Power Companies Be Doing Now?

The successful energy companies will be open to and encourage new business
models and ideas. They will aggressively seek to understand the e-business
drivers within the industry and adopt rapid, decisive strategies to dominate
the industry for the next few years. They will fundamentally change the
culture of their company – from an asset focus to an information focus.

This market demands an entirely new business culture and set of processes.
Companies need to be able to create as many options as possible to attack
the market, to change products, to improve processes, and to restructure
their organizations. At the same time, they need internal business management
approaches that identify the best of these options and rapidly adopt them
to create value.

Figure 4
Issues arise from e-business and competition

In this journey, leading energy companies are developing enterprise-wide
e-business strategies or aggressively pursuing e-business solutions. Some
of the most common value propositions include:

  • Cost Competitiveness: E-business enables aggressive outsourcing
    of internal and customer service-related processes and significantly
    reduces supply chain transaction and material costs. E-procurement expertise
    and technologies will result in reducing the utility’s spend by more
    than 10 percent, reducing transaction costs more than 75 percent, and
    greatly enhancing internal productivity.
  • New Competition: Web upstarts and industry convergences pose
    a significant threat given traditional utility industry characteristics
    (highly regulated, fragmented market, low brand value). The traditional
    utility must focus on a winning strategy that encompasses branding,
    personalized websites, integration with front-end (CRM) and back-office
    (ERP) operations, etc.
  • Retain and Reach Customers: In the competitive world, the Internet
    will become the main method of interaction with customers, requiring
    personalization and effective Customer Relationship Management. This
    is especially critical for those utilities focused on serving large
    Industrial & Commercial customers on a national basis.
  • New Business Ventures: Vertical industry portals and other
    equity ventures are becoming the norm in the marketplace, thereby
    providing utility companies with opportunities to realize profit from
    new sources.

Recognizing and capitalizing on these value propositions is the first
step in developing a credible and sustainable e-business strategy. The
winning companies are those that create the best set of options and rapidly
adopt them. Winners will also recognize that not every e-business option
will engender immediate financial payback.

Conclusion

E-business represents one of the most significant paradigm shifts in
the energy industry. As the new economy shifts companies from an asset
focus to an information focus, executives need to be focused on several
key messages:

  • E-business changes the way companies look at their business and is not
    only about technology and a website.
  • E-business needs to be integrated into a company’s overall strategic
    vision.
  • E-business will cause a major shift in a company’s overall infrastructure.
  • Successful e-business transformations need to be driven from the top
    of the organization.

Footnote

1 The model described is a PricewaterhouseCoopers adaptation of the
well-known Porter’s Five Forces Model to an e-business economy.

The Future of Energy Retailing

Trends

A New Taxonomy

The forces of change shaping the energy industry will combine with continued
regulatory pressure to further fragment the industry and bring about a
new energy taxonomy. As the future energy industry fragments, enterprises
will focus on one or more parts of the new value chain (see Figure 1).

Figure 1
The New Energy Industry Taxonomy

Incumbents may choose to perform one or more of the new energy value
chain roles, depending on perceived core competencies, risk appetite,
and preferred market focus (consumer segments served, channel mix adopted,
products and services offered, and regional/national/global concentration).
They will need to unbundle, then restructure, their activities to ensure
single-minded focus on their chosen future business model. The depth of
specialist competencies required to meet consumer expectations and compete
successfully will require incumbents to re-invent themselves in a revolutionary,
rather than evolutionary, manner.

Specialization and Focus

Traditionally a vertically-integrated monopoly, the energy industry will
increasingly expand horizontally as players specialize and focus. The
demands of the new retailing market, combined with the benefits of specializing,
will drive the separation of retailing activities from other parts of
the new value chain. Competition will force those companies choosing to
operate in multiple parts of the value chain to become “best in class”
in each part.

Energy retailing will converge with other forms of retailing and will
become much more competitive as the marketing and Customer Relationship
Management skills of major retailers are brought to bear on the energy
market. Hence, incumbents will need to think about the strategic direction
of unbundled retail activities such as:

  • Front office – Customer Relationship Management and energy product
    building; and

  • Back office – customer contact service provision, billing service
    provision, metering data provision, and meter ownership and maintenance
    (see Figure 1).

In the future, specialist service providers will provide back-office
services to the front-office players. Germany has seen the emergence of
specialist service providers such as Techem (metering and billing) and
Brunata Metrona (district heating, metering, and billing). In the Nordic
market there are, for example, Etrem and Enermet. The former is owned
by, inter alia, Fortum, Birka Energi, and Trondheim Energiverk and provides
metering, billing, and data transmission services. The latter, also partly
owned by Fortum, provides meters, terminal units, comprehensive metering,
remote reading, and load management services.

In effect, competition and the benefits of specialization will drive
the separation of retailing and back-office service provision. Some incumbents
may choose to continue to provide both office functions, but, at the very
least, these activities will be performed by separately managed profit
centers within the single corporate structure. Increasingly in the future,
however, they will be outsourced to specialists.

The benefits of specialization, coupled with the different cultures, capabilities,
and skills required of a successful retailer compared with those required
of a successful network service provider, will also inexorably drive
the separation of the two. For example:

  • In Norway, Oslo Energi AS, the largest electricity company in Norway,
    was split into Oslo Energi Holding AS and Viken Energinett AS in 1996.
    The former covers the contestable areas of sales and production with
    about 300,000 customers. In 1999, 49 percent of Oslo Energi AS was
    sold to Sweden’s Vattenfall. The latter company, Viken Energinett,
    covers the regulated monopoly areas of transmission, distribution,
    and district heating and has about 380,000 grid customers. In 1999,
    the company merged with EAB, purchased 12 percent of Akershus Nett,
    and transferred 25 percent ownership to Hafslund – all distributors
    in Norway’s east. Hence, distribution in the eastern region of Norway
    is specializing and consolidating horizontally.

  • In Australia, United Energy, Shell Australia and Woodside Petroleum
    have created Pulse Energy into which United Energy will tip its retail
    arm but continue to run its local electricity and gas networks (AFR
    9/3/2000).

Some incumbents will elect to continue to operate across more than one
part of the value chain. Ever-increasing competition will demand that
these companies become “best in class” in each part of the new value chain
in which they continue to participate. Each line of business may be established
as an independent unit, perhaps even a separate profit center, within
the overall corporate structure. (See Figure 2.)

Figure 2
A Possible Future Integrated Model

Segmentation

Successful future retailers will be proficient at market segmentation
and identification of profitable segments. Retailers choosing to serve
the large end-user segment will provide specialist services tailored to
the needs of geographically dispersed, multi-site customers. In the mass
market, convergence is likely to see new entrants to energy retailing,
including those with strong brands, proven Customer Relationship Management
skills and advanced Internet applications. Partnering will be used to
build coherent sets of offerings. Margins are likely to be low.

Energy retailing embraces a long list of end-user types – large volume
industrial users, commercial buildings, supermarkets, hospitals, small
businesses, government departments, and residential consumers, to name
a few. When other parameters such as demographics, psychographics, and
profitability are also considered, segmentation possibilities seem endless.
At the highest level, however, some retailers will focus on large industrial
and commercial consumers and others on the mass market.

Large Industrial and Commercial Consumers

For these end users, energy as part of their cost of goods sold ranges
from just a few percent to well into the 10 to 20 percent range. Wherever
consumers may sit in this range, the sourcing of cheaper energy that meets
their security and quality needs is too important to their competitiveness
and profits for them to procrastinate or hold to traditional loyalties.

In this segment, the popular practice of engaging an energy-purchasing
specialist and using a tendering process – potentially e-enabled – can
promote a largely price-driven approach to energy purchasing.

Specialist services will be required to meet the diverse needs of these
consumers. Some larger industrial sites will have access to on-site or
co-site generation. Others possess the infrastructure for dual fuel capability.
Some are moving to purchasing process steam from an outsourced facilities
manager rather than purchasing source fuel themselves.

For risk management reasons, generators and energy traders will also
want to retail to this group. For example, generators in New Zealand and
the U.K. have been reconstructing the energy value chain by buying retail
businesses.

Retailers choosing to operate in this business-to-business market segment
will develop national, international, or global footprints and provide
energy products and energy management services customized to individual
customers. They will be able to aggregate the energy needs of companies
with multiple, geographically dispersed sites and will provide specialized
services such as consolidated billing, dual fuel billing, risk mitigation
products and consulting services to improve energy efficiency. They are
likely to acquire the largest consumers of this segment by providing energy
products and services beyond the capabilities of the smaller incumbent
retailers.

In Germany, Euro-power has focused on this market segment. VASA, a joint
venture between Vattenfall and a German investor, has focused on supplying
municipalities with electricity for on-selling. HEW pursues the aggregated
loads of multi-site consumers and buying groups.

Mass Market Consumers

A range of new players is likely to enter the mass-market energy-retailing
scene. Incumbents will compete against major brand “bricks and mortar”
retailers from other markets and also against e-tailers. Examples of potential
national retailers in the U.K. include Tesco and Sainsbury and in Australia,
Coles Myer, and Woolworths. Global examples include Shell as a major brand
competitor and Yahoo!, Amazon, and AOL as Web-based retailers. These new
entrants will compete on value, brand equity, and the ability to deliver
through channels of choice.

Partnering, in the form of joint ventures or strategic alliances, is
likely to be a popular strategy. Very few retailers, if any, will have
the ability to compete successfully across all the products and services
of future market spaces. Examples of potential horizontally-integrated
mass market retailers offering coherent product and service bundles include:

  • Utility home products and services companies (energy, telecom, home
    security);

  • Broad energy companies (energy, gasoline, credit cards, loyalty programs,
    fast food);

  • Financial institutions (energy, credit cards, financial products,
    retail products, telecom, banking); and

  • Major brand retailers (energy, gasoline, loyalty programs, financial
    products, retail products, telecom, banking).

For risk management reasons, generators will also enter mass-market retailing.
In Germany, PreussenElektra and Bayernwerk now retail to domestic consumers.
Before deregulation, they wholesaled to regional and municipal companies.
PreussenElektra has launched ElektraDirekt to focus on domestic consumers.
Bayernwerk has introduced a family of products (PowerPrivate, PowerFamily,
AquaPower, and SunPower) targeted directly at residential consumers.

The gross margin earned in the mass market is likely to be low, particularly
where there is an over-supply situation, due to intense competition to
win and retain customers. Some new entrant retailers may offer energy
products at a loss to aggressively acquire new customers and at the same
time build a stronger, wider relationship with their current customers.
Significant economies of scale accrue to companies like EdF and Centrica
who already have customer bases numbering tens of millions of households.
Leveraging scale and scope to drive down costs to serve, costs to acquire,
and costs to retain, as well as for competitive advantage and differentiation,
will be critical success factors.

Unlike many incumbents, today’s successful major mass marketers have
already mastered a range of essential competencies. These include:

  • Strong brands supported by effective marketing to large customer
    bases;

  • Marketing strategies targeted to sharply defined key segments;

  • IT systems, revenue management, and customer service capabilities
    that are cost-effective and flexible;

  • Coherent product bundles, service offers, and pricing delivered with
    the help of a range of partners;

  • Exceptional value propositions;

  • Flexibility and responsiveness;

  • Strong customer touchpoints; and

  • Increasing provision of self-fulfillment functionality for customers.

Existing energy retailers may not have the skills and capabilities necessary
to “go it alone.” They will, however, be sought after as alliance partners
by new entrants.

Figure 3, “The Future Energy Retailing Industry Structure,” maps the
new market structure that is likely to emerge. A range of new retailers
will join incumbents. Traditional channels will give ground to e-tailing,
partnerships with other brands, affiliation marketing, brokers, and direct
sales.

Figure 3
The Future Energy Retailing Industry Structure

The Rise of E-tailing

Retailing barriers to entry are tumbling down, driven by the Internet
and new Web-enabled technologies.

The Internet enables carefully targeted communications and, when combined
with e-business, provides the platform for serving markets of one. Dynamic
customer segmentation is easier and faster. Products and services may
be modified, rearranged, and tailored at will. Innovation is almost instantaneous.
Far more customer information can be gathered and analyzed in a shorter
time. New energy retailing processes, systems, behaviors, and values are
needed in this environment, and they are vastly different from traditional
requirements. The adjustment will not be easy, and partnering may be the
optimum solution for many incumbents seeking to succeed as e-tailers.

Future e-tailers may be:

  • Incumbents who have reinvented themselves through specialization
    and focus. In Australia, AGL has flagged their intention to develop
    their Internet and e-business capabilities to leverage their expanding
    customer base in Australia and New Zealand (AFR 3/3/2000).

  • New entrants with well established brands. The Australian Centre
    for Retail Studies states that almost 90 percent of retailers with
    revenues of more than $600 million claim to have an e-business strategy
    (AFR 26/11/99).

  • Companies from other industries that are developing strong Internet
    competencies – for example, banks and financial institutions.

  • New, smaller enterprises with an effective website and a new way
    to sign up customers, for example, enermetrix.com and nexusenergy.com
    in the U.S. enermetrix.com offers savings on electricity and gas purchases
    of between 10 percent and 20 percent. The company helps customers
    define their energy contract requirements, and then over 40 suppliers
    compete in an auction for the load. enermetrix.com staff manages the
    contract documents and the purchase on behalf of customers. nexusenergy.com
    effectively extends wholesale prices into the retail market by aggregating
    mass-market customers up to wholesale suppliers.

E-tailers will race to attract and keep customers and, in the absence
of differentiation, discounting will be a powerful weapon in the battle
to win customers. The Internet enables marketing, sales, and transaction
costs to be significantly reduced, thus changing the traditional cost-effectiveness
equations associated with acquiring and retaining customers.
There is the very real possibility that e-tailers will use energy as a
loss leader. At the very least, they will encourage consumers to shop
around. The attraction of lower prices is powerful and can significantly
lessen the strength of brands and customer loyalty among price-sensitive
consumers. Under these circumstances owning the customer relationship
is difficult.

Looking beyond the immediate horizon, the rise of the Internet could
encourage the emergence of specialist energy traders appealing direct
to large end users with a single-minded lowest price offer, thus competing
directly with retailers. The Internet will also simplify the process of
forming and transacting with buying groups, especially those with large
numbers. The low cost platform of e-business will be especially effective
with residential and small business buying groups because the cost of
acquiring these low volume consumers is usually a barrier to competition.

In Conclusion

The future for today’s energy retailers will be very different. All will
face strong cost pressures and the prospect of low returns. They will
focus more on segmentation and implement marketing strategies tailored
to segments.

The mass market will segment into Internet users, other high-income consumers,
and the fuel poor. Competitive pressures from new entrants will be felt
as multi-product portals serve Internet customers, major brand retailers
compete for the other high-income “cherries,” and the fuel poor continue
to attract regulatory controls. Retailers serving the large industrial
and commercial segment will operate nationally and provide energy product
and service value propositions tailored to individual multi-site customers.

Retailing activities will separate from back-office service provision
as well as from network service provision. Retail returns on electricity
will probably be low, particularly if used as a loss leader by multi-product
new entrants looking to maximize the lifetime value of their customers.

New forms of energy retailing, leveraging broader channel mixes, are
being created. Together with ongoing change, this will create a risky business
environment. To be successful, energy retailers should develop capabilities
to:

  • Gain a deep understanding of consumer needs as the basis for dynamic
    segmentation;

  • Select profitable target markets and their preferred channel mix;

  • Go to market with unique value propositions that create consumer
    expectations and shape end user needs;

  • Manage service delivery to meet customer needs;

  • Provide the infrastructure to cost-effectively support those needs;

  • Earn a profit by fulfilling those needs.

Smart Metering Provides Competitive Advantages To Energy Service Providers

Introduction

For many years, meters served only to measure the kilowatt-hours being
consumed, and the purpose of this measurement was simply
to generate a bill. That time has passed, however. Today’s “smart” meters
serve as gateways to better understanding and serving the energy consumer,
to operating a transmission and distribution network more efficiently, and
to delivering value-added data that gives energy service providers a
competitive advantage.

The business and operations function of meters has expanded greatly in
recent years, from serving simply as the cash registers of a utility to
serving as the communication kiosk between the energy service provider
and its customer. Electronics, communication, and – most of all – connectivity
are today turning the metering business into a cost-optimizing, customer
relations-improving, and transaction-enabling business.

Progressive utilities are discovering that the intelligence gathered
through smart metering can provide a competitive advantage when actively
used to gain operations efficiencies and accountability. They are finding
that smart meters can collectively serve as a powerful marketing and branding
tool as well.

Electronics

The migration of metering technology from electromechanical to electronic
was the first wave of impact on meters’ journey to becoming “smart.”

Electronics have progressed to the point where meters can now measure
the actual quality of service being provided to the end consumer. The
device has become like a video camera, recording the kind of energy present
at the site, the number of outages, the harmonics present, the phases
that are active, etc. The device has even become a watchdog for utilities,
instantly reporting when the meter has been installed incorrectly or is
being tampered with. All of these capabilities, and more, are available
today in most electronic meters.
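Purely as an illustrative sketch (the field names are hypothetical and not tied to any vendor's meter), the richer record an electronic meter can report might be represented like this:

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class MeterReading:
        """Illustrative snapshot of what an electronic meter can report beyond consumption."""
        meter_id: str
        timestamp: datetime
        energy_kwh: float            # cumulative consumption
        voltage_per_phase: dict      # which phases are active and at what voltage
        outage_count: int            # outages recorded since the last read
        harmonic_distortion_pct: float
        tamper_flag: bool            # raised if the meter detects tampering or mis-installation

    reading = MeterReading("MTR-001", datetime.now(), 15234.7,
                           {"A": 119.8, "B": 120.2}, 2, 3.1, False)
    print(reading)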

Communication

The next frontier for metering was adding a communication component.

It started years ago with a simple telephone modem, an electronic device
that enables the exchange of data over a telephone line between two computers.
But as the communication industry evolved, so did the communication capability
for meters.

Although radio and telephone are the most common communications systems
in use, today’s meters can also communicate via numerous other methods,
including TV cable and satellite. Meters can even now provide two-way
communication over AC power lines, transmitting customer usage data directly
over power lines to a wide area network gateway.

But clearly the fastest growing segment of the “smart” metering world
has been the use of radio.

Today, wireless fixed networks represent the next major wave in how utilities
will be utilizing metering and information. Wireless fixed networks allow
for remote, off-site meter reading, real-time and demand metering for
domestic and commercial/industrial customers, online customer outage and power
restoration information, and more.

But before the first two components, electronics and communication, could
help meters serve as gateways into better understanding and serving the
energy consumer and provide competitive advantages to energy service providers,
they required a third, critical component – “connectivity.”

Connectivity

The final frontier to obtaining truly smart metering requires connectivity,
and this could not have come about without initial significant developments
in electronics and communication. In essence, connectivity is the strategic
use of metering data among the various systems within a utility’s customer
relations – residential, commercial and industrial – and within the distribution
system, to significantly improve efficiencies and provide system optimization.

The world saw connectivity come alive with the Internet. Previously,
there were stand-alone computers and there was communication, but the
whole became greater than the sum of its parts only when there was connectivity.
Here, connectivity is not only the connection of an individual computer
to a centralized database or a mainframe, but the connectivity of one
individual computer to another individual computer worlds apart.

It is through this complete connectivity of metering data – throughout
the utility organization as well as with the customers – that the concept
of “smart metering and metering data” takes on a form of intelligence.
Connectivity uses electronics and communication to link data and technologies
to provide faster service and new conveniences, such as real-time communications
with customers, providing customers with more choices, and real-time pricing.
It is also significantly improving transmission and distribution efficiencies
by adding ready access to essential system data, providing “snapshots”
of real-time conditions and the ability to make system adjustments remotely.

It is not sufficient to have a stand-alone meter, no matter how smart
it is. It will no longer be sufficient to have an automatic meter reading
system operating in isolation, either. To truly harness the power of knowing
and serving customers, energy service professionals need connectivity
to their customers throughout their entire organization.

Connectivity through “smart” metering is revolutionizing the relationship
utilities have with their customers. Consider the impact of increasing
the connectivity of the metering data. For instance, a customer calls
in with an energy bill complaint. The customer service rep pulls up the
customer account with their CIS system. The data provides the address,
the last bill, and the customer’s name. The customer service rep then
pulls in a real-time reading on the customer’s meter. The customer service
rep sees the account’s profile and usage pattern for the last month,
and looks at the power outage history. The conversation between the account
rep and the customer may go something like this:

Customer: “My bill is 30 percent higher than last month’s,
there must be an error – your reader made a mistake or my meter is running
fast.”

Customer Service Rep: “Perhaps, but let us look at your
account right now. I show your bill was $50 last month and this month
it is $80, whereas last year in this month it was only $60.”

Customer: “That’s right, and so I know it is a mistake.”

Customer Service Rep: “Let’s look at a little more detail.
We show the average temperature last year at this month was 80 degrees,
whereas this year the average is 90 degrees. It appears that you had a
large usage on last Saturday and Sunday, did you have some guests?”

Customer: “Oh yes, that’s right, it was my daughter’s baptism
and we had all the relatives in for the weekend – boy was it hot.”

Customer Service Rep: “Yes, we show a record of 100
degrees for that weekend, I also see that this next month’s bill seems
to be on track to be back around $60.”

Customer: “Thank you, I guess it sure adds up.”

Customer Service Rep: “We have several programs that
you may benefit from, such as our hourly spot pricing program. For example,
when you had your party Sunday morning, we actually had lower generation
supply rates. Our calculations show that if you were on our spot pricing
rate, your bill would have been $65 rather than $80.”

Now look at the alternative. The same customer calls up but his customer
service rep doesn’t have connectivity:

Customer: “My bill is 30 percent higher than last month’s,
so there must be an error – your reader made a mistake or my meter is
running fast.”

Customer Service Rep: “Perhaps, could it have been because
of the weather, summer months are hotter.”

Customer: “My bill last year at this same time was only $60.”

Customer Service Rep: “Well, we do show that this appears
to be quite a jump, we’ll have someone come out and look at your meter.”

Customer: “When are they going to do that? My meter is inside
and I sure don’t feel like hanging around here all day waiting for someone
to check my meter, why don’t you just credit me $20 and when they come
the next month to read it again, they can check it then? – in fact I’ve
gotten a couple of offers to switch my provider, I think I may go ahead
and try someone else,” and so on.

Here the utility has likely still left the customer unsatisfied and must
incur additional costs to verify the accuracy of the customer’s meter. And who
knows when the customer will actually be reassured that nothing is wrong
with the meter? In the first case, because of the instant usage of the
metering data connected to all the other customer information, the customer
service rep was able to turn what started out as an angry customer into
an informed and satisfied consumer.
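The spot-pricing comparison quoted in the first dialogue can be reproduced with simple arithmetic. The sketch below uses made-up hourly usage and prices purely to show the mechanics of comparing a flat rate with an hourly spot rate:

    # Hypothetical hourly usage (kWh) and spot prices ($/kWh) for one day in the billing period
    usage = [1.2, 1.0, 0.9, 0.9, 1.1, 1.5, 2.0, 2.4, 2.8, 3.0, 3.2, 3.5,
             3.6, 3.4, 3.1, 2.9, 2.6, 2.4, 2.2, 2.0, 1.8, 1.6, 1.4, 1.3]
    spot_price = [0.05, 0.05, 0.04, 0.04, 0.05, 0.06, 0.07, 0.08, 0.09, 0.10, 0.11, 0.12,
                  0.12, 0.11, 0.10, 0.09, 0.08, 0.08, 0.07, 0.07, 0.06, 0.06, 0.05, 0.05]
    flat_rate = 0.10  # assumed flat tariff, $/kWh

    flat_bill = sum(usage) * flat_rate
    spot_bill = sum(u * p for u, p in zip(usage, spot_price))

    print(f"Flat-rate charge for the day: ${flat_bill:.2f}")
    print(f"Spot-priced charge:           ${spot_bill:.2f}")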

Power Quality and Outages

Another area being explored through connectivity is the ability to allow
a utility to be proactive about its service. Meters today have the capability
of notifying the system of a power outage. When this data is directly
connected to an internal power outage system, a crew can be automatically
and efficiently dispatched to correct the problem. And before the crew
leaves the area, meters can automatically report back that the power has
been restored, without any customer inconvenience. No more having to rely
on customers to call in to notify the utility of the outage. No more wondering
whether the repair fixed all of the customers’ problems. The ultimate
customer service occurs when a problem is corrected before the customer
even knows there was a problem.
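A simplified sketch of that flow appears below; the class and function names are invented for illustration and stand in for a real outage management system rather than describing any particular product:

    class OutageSystem:
        """Minimal stand-in for an outage management system (illustrative only)."""

        def open_ticket(self, meter_id, location):
            print(f"Ticket opened for {meter_id} at {location}")
            return {"meter_id": meter_id, "location": location}

        def dispatch_crew(self, ticket):
            print(f"Crew dispatched to {ticket['location']}")

        def confirm_restoration(self, meter_id):
            print(f"Power restored at {meter_id}; no customer call required")

    def handle_meter_event(event, oms):
        """Route meter-originated events: outage reports trigger dispatch, restoration reports close the loop."""
        if event["type"] == "power_fail":
            ticket = oms.open_ticket(event["meter_id"], event["location"])
            oms.dispatch_crew(ticket)
        elif event["type"] == "power_restore":
            oms.confirm_restoration(event["meter_id"])

    oms = OutageSystem()
    handle_meter_event({"type": "power_fail", "meter_id": "MTR-001", "location": "Feeder 12"}, oms)
    handle_meter_event({"type": "power_restore", "meter_id": "MTR-001", "location": "Feeder 12"}, oms)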

The capability of “connected” smart metering data has an even larger
impact on a utility’s commercial and industrial customers. It is possible
today for the utility to notify its largest industrial customer that
it may want to schedule maintenance on an equipment line because,
based on the harmonics being generated, a costly failure is likely
in the near future. Connectivity can allow utilities
to not only implement real-time pricing schemes of energy usage, but also
to automatically obtain verification of forecasted voluntary load curtailment
and instant monitoring of the actual curtailment.

Conclusion

Meter data started out being static information. With deregulation, utility
revenue is changing, becoming dependent on a value-based structure that
is, in turn, driving the need for active data.

The process of collecting metering data needs to become an active, value-driven
customer relationship management and transaction service. The meter is
now part of an overall smart metering data service that reduces the time
and effort of executing transactions, improving operating efficiencies
and customer relationships. This can be done when detailed meter
data is communicated in real time, not only to the billing department,
but also to customer service, operations, marketing, rates, and sales
departments. True “smart” metering exists when meter data is connected
and utilized throughout all operations and departments within the utility.

“Smart” metering can today serve as an asset and a sound investment,
rather than a necessary evil relegated to that specialized area within
the utility organization called “the metering department.” Through electronics,
communication and, most of all, connectivity, meters today are lowering
costs, improving efficiencies, and opening up new services and ways for
utilities to relate to their customers.

Customer Interaction in the Digital Age: Strategies for Improving Satisfaction and Loyalty


Introduction

The rapid growth and evolution of the Internet as a customer contact
channel has had profound implications on the way traditional companies
conduct business. Today, those who haven’t turned their “bricks-and-mortar”
business into a “clicks-and-mortar” operation are steadily losing customers
to those who have.

Why? Quite simply, because we live in an era of convenience. Now that
most companies are leveraging the Internet as a sales channel, there is
a realization that the ability to succeed hinges largely on the ability
to properly support one’s online customers.

These customers have many choices. If a website is difficult to navigate,
lacking in features, or unresponsive to customer needs, it will not
only be underutilized, it will be abandoned. It is far too easy for
online customers to try the competition’s products or services if they cannot
find what they need on the first site they select.

Facing Challenges

What does this mean for the utility industry? As energy providers across
the U.S. begin facing competition as a result of deregulation, more than
ever, call centers within utility companies need to provide exemplary
service to meet customers’ high expectations. Soon, utility companies
will be faced with many of the same challenges as today’s leading retailers
or telecommunications companies – the challenge of attracting and retaining
customers. With this in mind, the utility sector needs to be prepared
for an increased demand for assistance on its websites.

In preparation for this shift to the Web, utility companies more than
likely already have the most important building block in place – the call
center. However, there is a critical step to extending the call center’s
reach beyond the telephone: developing a strategy for transforming
the call center into a complete customer interaction center.

The Reality of Customer Loyalty

With the utility industry facing deregulation, electric and natural gas
companies will experience an increased need for Customer Relationship
Management (CRM) to help retain current customers and recruit new ones.

Case in point: According to J.D. Power and Associates Cross-Industry
Call Center/Customer Satisfaction Report, consumers are twice as likely
to switch service providers when offered a discount unless they receive
superior customer service1. The report also found that special promotional
offers, especially those based on price, could reduce consumers’ willingness
to remain loyal to companies with which they do business.

The good news is, utility companies are in a strong position to retain
existing customers. Another study by J.D. Power and Associates/Navigant
Consulting Inc. cited that 65 percent of residential customers are extremely
or very satisfied with their electric utility provider2.

Quality of service is critical in building and maintaining customer loyalty.
However, as competition and consolidation increase, some companies will
be left behind as others march aggressively ahead.

Transforming the Traditional Call Center into a Full-Service Customer
Interaction Center

Providing quality customer service is the lifeblood of a company’s call
center. However, the emergence of the Internet as a new channel for customer
contact has given rise to a whole new level of customer contact within
many organizations. In addition to customer service, the Internet allows
companies to manage activities such as order processing, account management,
statement presentation, bill payment, and frequently asked questions
(FAQs) about products and services.

 

Figure 1 – APAC Customer Services’ fully interactive voice and data environment

The prospect of building Internet capabilities into the call center is
daunting for many companies. For this reason, many choose to outsource
their customer care functions – both traditional phone-based services
and Web-based support.

Current investments in the call center – such as Customer Relationship
Management (CRM) and Computer Telephony Integration (CTI) – can provide
the same benefits to Internet interactions as they do for inbound and
outbound telephone calls. Companies can expand their call center operations
by either purchasing Web-based functionality and integrating it into the
existing infrastructure or by utilizing solutions from an Application
Service Provider (ASP) specializing in Web-based customer service software.

Before going into detail on the various Internet-based communications
channels, it is important to understand the CRM and CTI capabilities that
are instrumental in today’s call center operations.

Customer Relationship Management (CRM)

Many call centers use CRM software to manage customer interactions. CRM
applications can include capabilities such as sales force automation (SFA),
help desk, technical support, relationship management, and others. Each
of these applications revolves around a workflow process that is normally
tailored to the specifics of the business. Usually, CRM applications implement
CTI capabilities, giving them the ability to “pop” the appropriate screen,
based on the customer’s phone number or the information collected from
an Interactive Voice Response (IVR) system. The screen that is displayed
on the customer service representative’s (CSR) desktop gives them the
appropriate set of customer information, as well as the application capabilities
required to complete the customer service interaction.

Again, the capabilities of existing CRM applications can be leveraged
for Internet-based communication. A CSR should be presented with the same
information and tools for Web-based interactions as would be presented
for telephone calls. This eliminates the need for a representative to
undergo costly training on different systems.

Computer Telephony Integration (CTI)

The telephone plays a significant role in the Internet-enabled customer
interaction center. Therefore, it stands to reason that CTI systems will
also play a role. The most common CTI application is known as “screen
pop,” which delivers the basic capability of displaying customer contact
data on a CSR’s desktop simultaneously with the arrival of the customer’s
telephone call. The basic premise of a “screen pop” is to give the representative
the information they need in order to complete a customer service transaction,
avoiding the inefficiencies and frustrations of having to repeatedly ask
a customer for the same information. This holds true for Internet-based
customer service requests. For example, if an email requires a response,
the representative should be provided with the same “screen pop” of customer
contact history and information that would appear for an incoming telephone
call.
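
A minimal sketch of the “screen pop” idea follows, assuming a hypothetical
contact-history store keyed by phone number or email address; real CTI and
CRM products expose their own, more elaborate interfaces. The point is
simply that one lookup can serve both an inbound call (keyed by caller ID)
and an inbound email (keyed by sender address), so the CSR sees the same
record either way.

    # Hypothetical contact store; every name and value here is illustrative.
    CONTACTS = {
        "+1-203-555-0142": {"name": "J. Smith", "account": "UI-88321",
                            "history": ["High-bill inquiry, 07/12"]},
        "jsmith@example.com": {"name": "J. Smith", "account": "UI-88321",
                               "history": ["High-bill inquiry, 07/12"]},
    }

    def screen_pop(contact_key):
        """Return the record a CSR's desktop would display when a call or
        email arrives, whichever channel supplied the key."""
        return CONTACTS.get(
            contact_key,
            {"name": "Unknown contact", "account": None, "history": []},
        )

    # The same lookup serves an inbound call (key = caller ID from the CTI
    # system) and an inbound email (key = sender address).
    print(screen_pop("+1-203-555-0142"))
    print(screen_pop("jsmith@example.com"))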

Multichannel Communications for the Customer Interaction Center

Many Web-based communication channels present opportunities to achieve
greater levels of service and to reduce costs. For example, email is an
asynchronous communication medium, meaning that a customer is not actively
waiting for a response. This time interval presents opportunities to provide
a more informed and appropriate response.

A Web-based text “chat” session, which can be considered “near synchronous”
(i.e., an answer to a chat question is expected within a minute or two)
also presents opportunities. Even the act of handling multiple media
types within a single customer service organization presents opportunities,
as the following examples illustrate:

Content Analysis

In the case of email and chat, it may be valuable to perform sophisticated
content analysis before an incoming message is presented to a CSR. This
may include a knowledge base search, which would attempt to identify similar
questions for which a reply has already been created and could therefore
be reused. It may also provide the opportunity to analyze messages waiting
in a queue for detection of similarities so that a response can be created
and then sent in bulk. This is common when a customer base, as a whole,
experiences an event such as the roll-out of a new service or a special
promotion. This form of content analysis is not generally applicable in
the case of a telephone call, where a customer is actively waiting on
the other end of the line.
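
The sketch below illustrates, under deliberately simple assumptions, the
two forms of content analysis just described: matching an incoming message
against a knowledge base of previously answered questions, and grouping
queued messages that look alike so one reply can be prepared and sent in
bulk. Production systems use far richer text analytics; plain word overlap
is enough to show the idea, and every name here is hypothetical.

    def similarity(a, b):
        """Crude word-overlap score between two messages (0.0 to 1.0)."""
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / max(1, len(wa | wb))

    def suggest_reply(message, knowledge_base, threshold=0.4):
        """knowledge_base: list of (previous_question, canned_reply) pairs."""
        question, reply = max(knowledge_base, key=lambda qa: similarity(message, qa[0]))
        return reply if similarity(message, question) >= threshold else None

    def group_similar(queue, threshold=0.5):
        """Cluster queued messages so similar ones can share one bulk reply."""
        groups = []
        for msg in queue:
            for group in groups:
                if similarity(msg, group[0]) >= threshold:
                    group.append(msg)
                    break
            else:
                groups.append([msg])
        return groups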

Blending Media Types

The “blending” of multiple media types also presents opportunities for
greater service and reduced costs. Media-blended environments, coupled
with the asynchronous nature of email, will increase the efficiency and
effectiveness of a customer interaction center while achieving greater
economies of scale. For example, since it is typically acceptable to allow
longer wait times for email replies, gaps in telephone traffic can be
filled by having CSRs respond to email inquiries. If you consider a call
center comprising hundreds of agents who experience an average idle
time (or time spent in a “ready” state) of 10 percent, the payoff is significant.
Even if only half of the agents in a 200-person call center are skilled
to handle email, you will essentially have the equivalent of 10 CSRs dedicated
to email. This is significant when you consider the cost of hiring, training,
and retaining 10 agents.
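
The staffing arithmetic behind that example, made explicit (the figures
are the ones quoted above, not new data):

    agents = 200
    idle_fraction = 0.10          # average time spent in a "ready" state
    email_skilled_fraction = 0.5  # share of agents trained to handle email

    email_fte = agents * email_skilled_fraction * idle_fraction
    print(email_fte)              # -> 10.0 full-time-equivalent email CSRs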

Proactive Support and Cross Selling

The Internet provides a great forum for proactive customer service, up-selling
and cross-selling. For example, if an important customer has submitted
an improperly completed form multiple times, a notification can be sent
to a representative. This would give the customer interaction center the
opportunity to proactively contact the customer while they are still on
the website and possibly still experiencing difficulties. While this may
sound intrusive, there are a number of ways to ask the customer if he
or she would like to interact with a live agent before actually contacting
them. If a customer has exhibited a certain behavior on the website (i.e.,
remaining on a particular page for a long period of time or looking at
a series of related pages), a notification can be sent to a representative
or intelligent software program. This would give the CSR the equivalent
capability of being able to “walk up” to a customer and ask, “May I help
you?”

Which Communication Channels are Right for Your Customer Interaction
Center?

Offering the customer a choice is key. Some might feel that Web-based
text chat is the best form of communication between the customer and CSR,
since it does not require special hardware or software to be installed
on the customer’s computer. Others believe Voice Over Internet Protocol
(VOIP) is preferable because real-time communication by voice is more
natural and doesn’t require a second phone line. The telephone and email
are certainly the most widely used forms of communication between agent
and customer. The correct answer is to let the customer choose.

Telephone

The telephone – including toll-free services, IVR, and outbound solicitation
– has been at the forefront of customer communications for the past 20
years. Therefore, when thinking about Internet communication channels,
one must not forget the telephone, as it continues to be one of the most
heavily used forms of communication. But patterns are shifting: by
2003, more than 70 percent of households are expected to be online3, and research
suggests that the majority of customer service inquiries will migrate
to the Internet as well. However, it is still important to consider how
an Internet customer can effectively use the telephone to communicate
with a CSR.

For example, most websites provide at least a customer support telephone
number. More advanced sites have “call-me-back” capabilities, giving the
customer the opportunity to request a phone call from the customer interaction
center. However, if you consider the power of an Interactive Web Response
(IWR) system, in-depth information can be collected from an online customer
much like an IVR system. When a “call-me-back” is requested, the customer
interaction center can then assign the most appropriate agent to place
the phone call. Statistics such as position in queue and expected wait
time can be provided to give the customer a visual representation of their
“virtual” call, as well as provide them with information on when they
can expect to receive the phone call. “Call-me-back” capabilities can
also be designed to give the customer the option of scheduling a phone
call for a later time.
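
The “virtual call” statistics mentioned above can be approximated very
simply. The sketch below estimates an expected call-back time from queue
position, the number of agents placing call-backs, and an average handle
time; it deliberately ignores proper queueing theory, and all the figures
are illustrative rather than drawn from any real system.

    def expected_wait_minutes(position_in_queue, agents_on_callbacks,
                              avg_handle_minutes):
        """Rough estimate of when the customer can expect the phone call."""
        if agents_on_callbacks <= 0:
            raise ValueError("need at least one agent assigned to call-backs")
        return (position_in_queue / agents_on_callbacks) * avg_handle_minutes

    # A customer 12th in queue, with 4 agents placing call-backs that
    # average 5 minutes each, would see roughly a 15-minute estimate.
    print(expected_wait_minutes(12, 4, 5))   # -> 15.0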

Email Response

Email is certainly the most practical and common form of communication
on the Internet today. The growth of email can be attributed to the fact
that it is easy to use and extremely convenient – no special hardware
is required, and in many cases, no special software, just a browser.

According to Forrester Research, 71 percent of consumers using customer
service now turn to email to resolve their issues, while 51 percent use
the telephone4. Not only are these two channels used most often, they
are also the most preferred.

As a result, Internet sites are starting to receive email by the thousands
on a daily basis. Customers are realizing that it is more convenient to
write a quick email than to sort through an IVR, wait in queue, and deal
with potential transfers between CSRs – often typical of a telephone-based
experience. Furthermore, email can be sent 24 hours a day, seven days
a week.

While sending an email is easy from the customer’s perspective, it puts more responsibility
on the customer interaction center to properly manage it behind the scenes.
In 1999, Jupiter Communications reported that an astounding 46 percent
of emailed customer service inquiries went unanswered or were not responded
to within five days5. As companies begin to leverage the same technologies
and processes that have been established and perfected within their call
centers, these numbers will improve. If nothing else, the simple fact
of competition will drive them to support customer email.

Shared Browsing and Application/Form Sharing

Shared browsing can be considered the most basic form of application
and form sharing. It gives the customer and the CSR an opportunity to
co-navigate a website, so that they both see the same Web page at the
same time, regardless of who clicks on a link. This gives the CSR the
ability to “guide” a customer through the website. Shared browsing, in
its simplest form, is a relatively lightweight feature – a Java applet
or ActiveX control is dynamically loaded when the customer accesses the
customer service portion of the Internet site. In a more sophisticated
form, shared browsing can allow a CSR to actually enter data into a Web
page on behalf of the customer, though this is more complex in terms of
security and performance.

Sharing HTML-based forms extends the capabilities of shared browsing
by allowing not only sharing of Web pages but by also allowing a customer
and CSR to interactively complete HTML forms. The CSR sees what the customer
types; the customer sees what the CSR types. The two can work collaboratively
to accomplish the task at hand.

Voice Over the Internet Protocol (VOIP)

One of the primary benefits of VOIP is that, in the case of Web-based
customer service, it removes the need for a customer to disconnect their
Internet connection to make a call. It also has cost saving implications
since a public switched telephone network (PSTN) phone call is never required.
However, there are issues with VOIP. First, it requires an Internet customer
to have a VOIP-enabled application, speakers, and a microphone installed
on their computer. Secondly, the loss of fidelity and synchronization
due to the necessity of voice compression and network hops lowers the
quality of VOIP to a level that may not be acceptable for some customers.
Nonetheless, VOIP, which is a technology still in its infancy, will only
continue to improve and increase in popularity.

Though proprietary protocols exist, most VOIP applications implement
H.323, a standard for real-time multimedia communications and conferencing
over IP and ISDN networks. This allows customers to use any H.323 compliant
applications such as Microsoft NetMeeting and Netscape Collabra as VOIP
applications.

Web-Based Text Chat

Web-based text chat, also known as “text conferencing,” is important
considering that not everyone has two phone lines or a computer equipped
with a microphone and speakers. When neither is available, a user must
disconnect from the Internet in order to speak to a CSR, and
this can be a hassle. If the user chooses to communicate with a CSR via chat,
however, the two can communicate in near real-time for quick resolution.

The basic functionality of chat is to allow a customer to ask questions
via their browser and view text-based responses from a CSR in near real-time.
Similar to the case of handling email, chat also gives the customer interaction
center the opportunity to invoke text analysis capabilities. Suggested
responses can be automatically inserted into a chat session, relieving
the agent of answering repetitive questions. Scripts can be executed
within a chat session, automatically helping a customer through a series
of questions that will ultimately help the CSR deliver better service.

Similar to shared browsing, Web-based text chat comes in many forms.
Chat technology can consist of an HTML form, a Java applet, a browser
plug-in, or a stand-alone application. Standards such as H.323 also support
chat, enabling interoperability between H.323 compliant applications.
If chat is to be supported within a customer service environment, it is
most practical to support a range of approaches, enabling customers with
less sophisticated chat capabilities installed on their computers to communicate
with CSRs.

Is Your Website Customer-Friendly?

Having the right communication channels in place is futile if customers
aren’t aware of your website, or if they are not using it to its fullest.
Attracting customers to the site is of paramount importance. Ultimately,
a strong website will help build relationships with customers and empower
them to accomplish their goals at a time and place that is convenient
for them. Also, a strong website will off-load CSRs and provide additional
up-selling and cross-selling opportunities for the company. A useful,
customer-friendly site will keep customers from turning to a competitor,
thereby improving customer loyalty.

What makes a website successful? It is a combination of things, including:

  • A visually stimulating and intuitive interface

  • Customer-empowering functionality

  • Access to valuable and timely information

  • The provision for assistance

A talented graphic artist can create the right look and feel, but it’s
a good IT manager and customer service manager who will integrate the
right capabilities into the company’s website to empower the customer.
Here are several levels of functionality to consider:

Self Service

This is the backbone of e-commerce – the ability for an online customer
to easily purchase a product or service. Usually, self service is very
specific to the business behind the website, requiring integration with
back-office applications. However, off-the-shelf technologies from a host
of e-commerce vendors offer functionality such as electronic payment,
profiling, publishing, cataloging, legacy integration, and membership.

Self Help

Self help allows customers to answer their own questions without interaction
with a CSR. In its most basic form, self help can simply be a list of
Frequently Asked Questions (FAQs). In a more advanced form, self help
can provide context-sensitive help, search a knowledge base, or even detect
user problems and automatically suggest solutions. Providing self help
capabilities can eliminate the frustration that users experience when
they’re unclear about how to accomplish a task or they simply get stuck
with a question.

Assisted Help

No matter how useful the self help capability, there will always be cases
where a customer will need human assistance. And the more choices they
have in how to obtain that assistance, the better. In the case of choosing
an energy provider, the customer is making a choice that is of high value
to them; therefore, it is desirable to have highly trained CSRs to assist
with their inquiries. Assisted help takes on many of the different channels
mentioned above – phone, email, chat, VOIP, shared browsing, form sharing,
etc.

Interactive Web Response

A website can have functionality similar to an IVR, which is typically
used to collect information about a voice caller or provide basic self-service
functionality. If an online customer requires assistance from a CSR, the
Web pages that the customer has visited as well as any other essential
information can be collected and given to the agent as “attached data.”
Furthermore, this information can be used to determine the skills required
to adequately address the customer’s inquiry. Due to its similarity to
an IVR, this capability has been coined IWR – Interactive Web Response.
Though, in essence, an IWR embodies the entire functionality of the website,
the process of collecting data relevant to the customer’s inquiry is the
one that is most directly analogous to its voice counterpart, the IVR.
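
A small sketch of the IWR idea follows, with hypothetical page names and
skill tags: the pages an online customer has visited become “attached
data” for the agent and also drive a simple skill-based routing decision.
Nothing here reflects a specific product’s API.

    # Hypothetical mapping from site pages to the agent skill they imply.
    PAGE_SKILLS = {
        "/billing/high-bill": "billing",
        "/service/start-stop": "orders",
        "/outage/report": "operations",
    }

    def build_attached_data(visited_pages, form_fields):
        """Bundle what the customer has already done into 'attached data'."""
        skills = {PAGE_SKILLS[p] for p in visited_pages if p in PAGE_SKILLS}
        return {
            "visited_pages": visited_pages,   # what the customer has looked at
            "form_fields": form_fields,       # anything they already typed in
            "required_skill": next(iter(skills), "general"),
        }

    request = build_attached_data(
        ["/billing/high-bill", "/billing/payment-options"],
        {"account": "UI-88321"},
    )
    # The interaction center can now pick an agent with the required skill
    # and pop the request itself onto that agent's desktop.
    print(request["required_skill"])          # -> "billing"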

Making the Leap

The customer interaction center transcends the traditional call center
and offers communications through some or all of these media. It is not
just about taking phone calls or answering email. It is about communicating
with customers in the form they choose while making a consistent, lasting
impression.

Recognizing that in most cases major change is difficult to achieve instantaneously,
it is important to consider the foundation required to support the migration
to Web-based customer care. From a people and process standpoint, the
foundation rests ultimately on the CSRs and the methods by which they deliver
customer service. These resources should be leveraged yet retooled to
support new customer interfaces.

From a technology standpoint, the foundation consists of an IT infrastructure
that is flexible enough to accept change. This entails an infrastructure
layer that abstracts the particulars of a customer interface or communication
channel and presents a unifying “interaction model,” enabling the use
of common business applications.
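
A toy sketch of that kind of abstraction layer, tied to no particular
product or framework: every channel is normalized into a common
“interaction” shape, so the same business application can handle a call,
an email, or a chat without knowing the difference. All class and field
names below are hypothetical.

    from dataclasses import dataclass, field

    @dataclass
    class Interaction:
        channel: str                      # "phone", "email", "chat", ...
        customer_key: str                 # caller ID, email address, session ID
        body: str = ""                    # message text, if any
        attached_data: dict = field(default_factory=dict)

    def handle(interaction: Interaction) -> str:
        """Common business logic, independent of the originating channel."""
        greeting = f"[{interaction.channel}] customer {interaction.customer_key}"
        return f"{greeting}: routing with data {interaction.attached_data}"

    print(handle(Interaction("email", "jsmith@example.com", body="High bill?")))
    print(handle(Interaction("phone", "+1-203-555-0142",
                             attached_data={"ivr_selection": "billing"})))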

The Outsourcing Advantage

Many of the communication channels mentioned, such as email and chat,
are software-enabled yet require a human touch. Because of the people,
processes and technology needed to run a successful customer interaction
center, many companies choose to outsource their customer service operations.
The strategy of outsourcing allows an organization to focus on its own
core competencies while entrusting an experienced partner to service its
customers.

Outsourcing, specifically of CRM functions, offers the following advantages
vs. an in-house customer interaction center:

  • Outsourced CRM provides dedicated support for your programs without
    the need to develop an expensive infrastructure.

  • Outsourcing provides the ability to employ advanced technologies
    for competitive advantage without the financial burden of purchasing,
    installing, and maintaining expensive systems.

  • Outsourcing allows a company to focus its attention on key internal
    issues and develop internal core competencies.

  • Programs can be added, subtracted, or expanded without battling internal
    restrictions or physical limitations.

  • By eliminating the need to physically develop facilities and personnel,
    companies can react more quickly to changing market conditions and
    make decisions based on long-term needs, not physical plant or human
    resources issues.

  • Freed from the need to develop internal expertise, a company faces
    diminished demands on its available human resources while retaining
    talent and access to leading practices.

  • Lastly, it is always easier to be the customer than the manager.

But, the question often remains, “How do you know when to implement an
outsourcing arrangement if you already have an in-house facility?”

This question is one that should be addressed in a careful, considerate,
and objective fashion, so as to avoid any appearance of impropriety and
to make certain that the client gains the most from the transition. Not
every company will find outsourcing the proper move, and not every moment
is ripe for making a shift in your customer service platform.

One of the driving factors leading to outsourced customer care functions
is technology. Simultaneously the lifeblood and the bane of a customer
interaction center’s existence, technology requires constant investment
in order to stay current, let alone to take a leadership role.

If a company is experiencing difficulties getting database and programming
support from its IT department, this is a warning sign. Internal IT priorities
are not always in line with call center needs, and if the IT relationship
isn’t sufficient now, it will likely not improve in the near future. Outsourcing
can offer significant opportunities for improvement in this area.

Facilities are another key component of success. A company faced with
increased competition might find itself expanding quickly and needing
more space, yet find none readily available. In today’s business world,
companies are usually loath to invest in more “brick and mortar,” which
means expansion needs could well run into a brick wall.

Finally, personnel issues are central to successful operations in the
customer interaction center. Tight labor markets often mean difficulties
in recruiting. That in turn may well affect a company’s ability to staff
up to meet service level targets and expanded hours of operation, all
the while maintaining profitability.

Outsourcing all or some of your customer care services can remedy many
of the aforementioned maladies.

Conclusion

Whether a company manages its call center in-house or chooses an outsourced
partner, Web-enabled customer care is now a business imperative. By recognizing
that a full-service customer interaction center can help attract new customers
and build loyalty, while increasing profitability and productivity, companies
will have a competitive advantage. The utility industry, specifically,
places great emphasis on customer feedback in order to help determine
policies and procedures for its call centers. So, whether the company
is adding new technologies, re-engineering websites or consolidating call
centers, today’s utility companies must have high-powered plans in the
works to compete and succeed.

Footnotes

1 J.D. Power and Associates, “2000 Cross-Industry Call Center/Customer
Satisfaction Report.” Used with permission.

2 J.D. Power and Associates/Navigant Consulting, Inc., “2000 Electric
Utility Residential Customer Satisfaction Study.” Used with permission.

3 Forrester Research, “Customer Interaction Outsourcers,” January 1999.

4 Forrester Research, “Driving Sales with Service,” November 1999.

5 Jupiter Communications, “Customer Service Survey Results of 125 Websites,”
3rd Quarter, 1999.

Recurring Payments by MasterCard

Through deregulation, federal and state government reforms, and rising competition,
utilities are faced with tough challenges. There is increasing pressure to reduce
costs and increase operating efficiencies while, at the same time, providing
the high level of service that consumers are demanding. Accepting MasterCard
cards for recurring payments can be an important component in helping you achieve
both your financial and customer service goals, by helping your company reduce
costs, receive payment faster, reduce losses from processing problems, and offer
additional payment options to your customers.

More than 79 percent of U.S. households carry a bankcard (such as a MasterCard)
for the convenience, the flexibility, or to earn rewards benefits. And today,
these cardholders are looking for more opportunities to use their cards. As
a result, many service-oriented industries, including utilities companies, are
accepting payment cards, offering their customers an attractive payment alternative
to cash and checks.

Combine this convenience with improved customer service and reduced operating
costs, and it’s no wonder card acceptance is becoming an integral part of many
utility companies’ strategic plans.

For 30 years, MasterCard has been a premier payment system in America. Now
you can take advantage of all we have to offer, including materials and programs
designed specifically for utilities service providers interested in (1) accepting
payment cards for the payment of utility goods and services and (2) participating
in recurring payments programs.

Imagine the ability to streamline your billing process and build customer loyalty!
“Recurring Payments by MasterCard” can help you do this.

Recurring Payments: A Payment Mode

This white paper focuses only on recurring payments by MasterCard. In the dictionary,
the word recurring is described as an adjective meaning “to happen or come up
again repeatedly.” In the collections world, a recurring payment describes an
arrangement whereby a customer agrees to allow a company to bill against a specific
account, for mutually agreed-upon payment amounts (which can be fixed or variable), at
specified time intervals. It is yet another way – or mode – that a
customer can pay for goods or services, joining a growing list that includes paying
in person, by mail, by phone, at a kiosk, and over the Internet.

The payment methods that can be used for recurring payments include credit
card, debit card, charge card, private label card and debits against deposit
accounts. A monthly, quarterly, bi-annual, or annual Automated Clearing House
(ACH) debit against an existing deposit account is an example of a recurring
payment. Recurring payments typically occur monthly or quarterly and can be
a fixed or a variable amount.
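
For illustration only, one way such an arrangement could be represented
in software is sketched below: an authorization records the account
reference, the billing interval, and whether the amount is fixed or
variable, and a helper computes the next charge date. The field names are
hypothetical and are not part of any MasterCard specification.

    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    @dataclass
    class RecurringAuthorization:
        account_token: str            # opaque reference to the cardholder's account
        interval_months: int          # 1 = monthly, 3 = quarterly, 6 = bi-annual, 12 = annual
        fixed_amount: Optional[float] = None   # None means the amount varies bill to bill

    def next_charge_date(last_charge: date, auth: RecurringAuthorization) -> date:
        """Advance the billing date by the agreed interval (capped at day 28
        so short months never produce an invalid date)."""
        month_index = last_charge.month - 1 + auth.interval_months
        year = last_charge.year + month_index // 12
        month = month_index % 12 + 1
        return date(year, month, min(last_charge.day, 28))

    # A quarterly, variable-amount arrangement billed on the 15th:
    auth = RecurringAuthorization("tok_example", interval_months=3)
    print(next_charge_date(date(2001, 1, 15), auth))   # -> 2001-04-15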

An Increasingly Popular Way to Pay

Recurring payments by credit and debit card have been around for more than
25 years. Some service providers recognized both the merchant and consumer benefits
of this payment mode shortly after the credit card began to boom in popularity
in the early 1970s. Service providers wanted to make the payment process easier
for consumers. Some retailers wanted to give their customers the opportunity
to divide one purchase into installment payments of a fixed amount.

A 1996 study by the National Consumers League and the Opinion Research Corporation
found that 20 percent of its survey respondents who have a credit card have
“given permission to a company to make regular, monthly, or yearly charges against
a credit card.”

Many service providers can offer their customers the recurring payment option,
but several industry categories stand out as the most likely to provide it:

• Utilities
• Telecommunications
• Cable
• Insurance
• Internet Service Providers
• Charities
• Security Services and Protective Agencies
• Retail/Catalog
• Memberships
• Subscriptions
• Professional Services
• Home Services (Cleaning, Lawn Care, Pools)
• Child Care

These industries account for over $1 trillion in consumer payments each year
in the U.S. and about $820 billion in recurring charges. Though some of the
industries have built their entire collection effort around recurring transactions
by payment card (satellite television, Internet services), in most cases the
penetration of card payments as a percentage of all payment methods (checks,
cash, ACH, etc.) is less than two percent. Many industry observers believe that
promoting recurring payments by payment card is a way to encourage card acceptance
in these highly under-penetrated merchant categories.

Consumer Demand

Consumers Want the Recurring Payments Option

In focus groups of consumers conducted in mid-1996, MasterCard learned that
consumers would like to have the option to pay on a recurring basis by using
credit and debit cards. Focus group participants universally reported feelings
of drudgery and anxiety when discussing the process of paying monthly bills.
“Doing the dishes is more fun,” said one participant. These consumers believe
that recurring payments will result in fewer checks written, reduced anxiety
by not having to spend as much time “watching money being drained from my account,”
and the potential to earn rewards (such as frequent flyer and automobile
rebate programs).

Those focus group participants who favored the concept suggested the benefits
of recurring payments by card versus traditional ACH debits to be (1) the option
to revolve payments and (2) support from issuing banks in resolving billing disputes
with the service provider. There is some indication that the ability to execute
recurring payments by card could impact the consumer’s choice of card brand,
loyalty to a card issuer, and loyalty to specific service providers. “If I had
a bunch of those recurring charges on a card, I wouldn’t want to go through
the hassle of putting those charges on a different card,” one participant said.
“I’d be pretty much likely to keep that card unless I really had a problem with
it,” another said.

Certain consumers highlighted issues with recurring payments by payment card.
For example, some consumers cited a perceived lack of control with any recurring
payment (by ACH or card). Some customers would like to receive written or electronic
confirmation of successful recurring transactions. The National Consumers League
study found that 15 percent of credit cardholders that have used the recurring
payments option “had difficulty terminating these charges.” Of these, 31 percent
said it took “several months to stop the charges.” MasterCard saw nearly identical
results in a similar study. Some customers would not be comfortable with large
ticket transactions (e.g. mortgage payments) in the recurring mode. Focus group
participants indicated that promotional offers (e.g. discounts or additional
frequency awards) would stimulate recurring payment trial and ongoing volume.

Quantitative Study Underscores

Consumer Perspective

In late 2000, MasterCard conducted quantitative consumer research to better
understand consumers’ payment methods for recurring charges, including consumer
attitudes, current behavior, decision criteria, and concerns. A very large percentage
of U.S. households receive recurring bills (usually monthly) for various services
(see Figure 1) including utilities, insurance, cable, and telephone service.

Insurance – 90%
Telephone – 70%
Cable/Satellite TV – 69%
Utilities – 66%
Mortgage/Home/Student Loans – 44%
Magazines/Newspaper – 42%
Auto Loans/Lease – 36%
Internet/Online Services – 34%

Figure 1 – Common types of recurring bills

Although check writing continues to be the dominant method of paying recurring
bills in many industries, consumers are rapidly moving toward other payment
alternatives. Sixty-nine percent of consumers today are involved with automatic
bill payments, with thirty-seven percent linking their automatic deductions
to a credit card. Another important finding was that credit card recurring payment
(CCRP) users average 3.7 automatic recurring payments, of which more than half
(2.1) are CCRPs. The most prevalent alternatives to credit cards are automatic
paycheck and checking account deductions.

Our research showed that consumers have a high level of awareness of
the credit card recurring payment option; however, awareness differs by industry
(see Figure 2).

 
70%–84% awareness: Utilities, Magazines/Newspapers, Insurance, Internet/On-line Services, Cable/Satellite TV, Telephone

60%–68% awareness: Education Expenses, Ongoing Health Care, Health Clubs, Auto Loans/Lease, Mortgage/Home/Student Loans, Real Estate Taxes

Under 60% awareness: Charitable Donations, Public Transportation

Figure 2 – Credit card recurring payments awareness

Consumers realize the relevant set of benefits associated with the credit card
recurring payment (CCRP) option. Convenience and stress relief ranked the highest.
Consumers were relieved to know their bills were being paid on time without
having to think about it, and they appreciated the time saved by not having
to write checks (see Figure 3).

                      Reasons for CCRP   Benefits/Advantages   Reasons for Initial Use
Convenience                  53%                 53%                    42%
Stress Relief                18%                 45%                    19%
Only Option Offered          11%                  –                      7%
Financial Benefits            9%                 15%                     9%

Figure 3 – Reasons for using credit card recurring payments

Customer Loyalty

Of importance to merchants, over one-third of consumers said they would switch
from one provider to another to enjoy the credit card recurring payment option
if it were offered, all else being equal (see Figure 4).

 

  % Would Switch If Competitor Offered CCRP
Total Random        55%
CCRP Users          70%
RP Users (Non-CC)   52%
Non-RP Users        41%
Figure 4 – Opportunity for providers

A Growing Industry Trend

“Recurring Payments by MasterCard”

To address the growing demand for recurring payments, MasterCard introduced
a program for service providers that is designed to significantly lower the
cost of acceptance, improve cash flow, and increase customer retention and loyalty.
The program is available to merchants in the utility, insurance, telecommunications,
and cable TV industries. This program, the Service Industries Incentive Program,
or SIIP, has multiple advantages for participating providers.

Service providers who offer customers the ease and convenience of paying
recurring bills with MasterCard-branded payment cards can take advantage of
this growing industry trend. By accepting MasterCard cards for recurring payments,
you are likely to experience a wide range of measurable benefits, including:

• Improved cash flow — quicker turnaround than check processing
• Customer retention and loyalty
• Improved collections — reduced write-offs and bad checks
• Enhanced good will from offering customers a convenient payment option
they want
• Processing ease
• Secure transactions

By accepting MasterCard-branded payment cards, you’re offering your customers
loyalty-building benefits, too:

• Timely payment of ongoing bills
• Earning additional benefits including frequency points
• Ease, convenience, and security of using MasterCard

To support the introduction of recurring payments, MasterCard is offering participating
service providers an attractive incentive interchange rate for all qualified
transactions. In exchange, participating providers agree to promote “Recurring
Payments by MasterCard” to their customers, using a variety of tools ranging
from statement inserts to statement messages. Plus, MasterCard has created a
financial model, which can be used to help analyze the financial impact of MasterCard
card acceptance.

MasterCard International is committed to working with utility merchants to
help them meet their business needs. MasterCard believes that increased communication
and education with utility merchants about how to best administer recurring
payments will increase the utility of the MasterCard card for cardholders and
promote an increasingly seamless payment experience for all parties.

For more information about “Recurring Payments by MasterCard,” please contact
Kimberly Teague, Vice President, Service Markets Sales, at 914-249-5474.

The Path of Transition to a Wires Company: A Case for Embracing Change

The choice between mobilizing for a fight or embracing change may now
seem obvious, but there were many who chose the former path, wasting
significant resources and time. We chose the latter – a choice that
positioned us well for the new environment.

The board and management team, which preceded my tenure as chairman and
CEO, deserves tremendous credit for its prescience and determination,
which I believe are distinct factors in UIL Holdings’ current strategic
position.

During the early days of legislative discussion, we mobilized support
for enacting legislation that was right for our company and the industry.
We chose not to try and over-reach in the legislative debate. It was important
to us that we stay focused on the end game – lower energy prices for our
customers and acceptable returns for our shareowners. We were able to
function as an agent of change that was not problematic to the legislative
process because we were an organization that was well-run and well-respected.
Our motives were generally not questioned.

When the debate was concluded and the restructuring legislation enacted
in 1998, we moved with great alacrity to sell our generation assets and
contract out the standard offer. It was time to accelerate the transition to a wires
company.

The senior management team at UIL Holdings became convinced that our
strength was not as an energy marketer or generator. Our relatively small
territory and generation portfolio would place us at a distinct disadvantage
in the evolving industry that would be dominated by large regional, national,
and international companies. We felt our strengths were as a company that
would grow through solid management of our utility distribution business
– (the Wires Company) – and growth in non-regulated businesses that we
knew and managed well.

Figure 1 – Significant growth of UIL earnings

Manage and Eliminate Risk

As we prepared for and ultimately began the actual transformation process,
we employed several parallel initiatives to manage and eliminate risk.

First, our strategic preparation for the 1999 regulatory restructuring
decisions energized our organization and allowed us to retain a solid
base of regulated operations. Diligent preparation led to responsible
decisions by the State Department of Public Utility Control (DPUC) in
every major proceeding, including those involving stranded cost recovery
and the standard offer, which set customer price components for the next
four years.

Second, as the industry restructuring legislation was enacted, UI made
a number of strategic moves that have given us a distinct advantage.

We used a comprehensive approach to eliminate all energy exposure for
our shareholders and customers. In addition to selling our fossil generation
and preparing to sell our nuclear assets, we reached an agreement
with Enron to provide our standard offer service for the next four years,
thus managing what could be significant risk and exposure for customers
and shareowners.

As evidenced by the experience of the developing power markets – and
the spikes in clearing prices – our financial exposure would have been
enormous if we had taken another approach. It serves as a reminder that
we’re on the right path.

As we exited the electric generation business, we used the proceeds from
asset sales to strengthen our balance sheet. This has significantly improved
our credit rating and positioned us for a dynamic, yet balanced approach
to growth.

Focusing on Strength

The end of our role as an electric generation company gave us the opportunity
to focus on and augment our strengths – first, as a reliable and responsive
energy distribution company – and second, as a provider of energy-related
services.

Investing wisely in the right people has paid dividends in our principal
non-regulated units, just as it has for the utility.

One of these, American Payment Systems (APS), is the nation’s leading
supplier of walk-in bill payment processing services to the utility industry.
Last year, it processed 81 million payments and handled more than $8 billion;
it should handle close to $9 billion this year. APS recently acquired
QuikPay!, which gives it a PC-based technology to rapidly reach significantly
more customers.

Our other principal non-regulated business, Xcelecom (formerly Precision
Power, or PPI), has set out to become the leading provider of electrical
and voice-data-video (VDV) services to industrial, commercial, and institutional
customers throughout the New England and Mid-Atlantic regions.

In 1999, Xcelecom acquired Allan Electric Company and has recently acquired
the DataStore, JBL Electric, and Orlando Diefenderfer, profitable and
respected companies in New Jersey and Pennsylvania. Xcelecom will acquire
other strategically relevant companies in the coming months to complement
the Xcelecom businesses. Both Allan Electric and APS were profitable in
1999, and we anticipate increased profitability in 2000.

The goal of expanding our earnings base has led us into related growth
centers. We invested in other promising non-regulated businesses including
the most advanced combined cycle plant in Connecticut, built by Duke Energy
and Siemens Power, in which UIL Holdings now holds a one-third interest
on a non-operating basis.

Investing Wisely

We’re also actively considering other purely financial investments of
commensurate profit potential. For instance, we’re working with a project
developer to assess the growth potential of a high voltage DC transmission
cable that will connect New England to Long Island as an alternate energy
supply path.

We’re carefully making modest investments in regional economic development
initiatives to benefit our shareowners and the region we serve. Even though
these investments are small, we’re already seeing significant returns
from them.

Our expectation for these non-regulated businesses over the next three
to five years is that they will yield at least 20 percent of our total
earnings, and their performance to date more than justifies this optimism.

The expectations for the non-regulated units are underscored by our recent
move to restructure UI as the holding company known as UIL Holdings. The
regulated and non-regulated businesses are separated under the holding
company, which provides the unregulated businesses with a better platform
for growth.

Our focus on plans, people and performance is showing tangible results.
Over the past four years, we have seen earnings growth of eight to ten
percent a year, a level we consider very respectable for our industry.

As we embarked on our ambitious initiative to get the plans in place,
one of the foundations was an active and aggressive communications program
for all of our major audiences – employees, customers, legislators, regulators
(and their respective staffs) and shareowners. We endeavored to ensure that
each audience received one consistent message about our proposals, plans,
and results.

By being visible with each constituent group, we were better able to work
with the forces of change, anticipating needs and ensuring we had sufficient
time to make adjustments when needed.

Over the past 16 months, the corporation has been carefully building
its managerial and employee team to meet the demands dictated by change.
Our role as a widely-recognized leading energy distributor, our promise
as a holding company of growth-oriented business units, and the changes
we foresaw in our industry have all been deftly managed by a team of enormous
insight and foresight. We have a team of leaders who have proven that
change can always be turned to our advantage.

Clearly, UIL Holdings Corporation has emerged from restructuring all
the stronger, having embraced change and prepared for the inevitable with
an aggressive plan carefully balanced by prudence. We’ve proven ourselves
to be a solid investment and a company willing to seize the moment, ruling
out no relevant opportunity that offers us real potential to grow in value
to our shareowners and customers.

As we underscore our vision of a solidly managed utility distribution
business and growth in non-regulated businesses that we know and manage
well, every manager and team leader at UIL Holdings has personally committed
to run this company as aggressively and intelligently as we can. That
is a continuing promise, and the results to date confirm that we are a
company that consistently delivers on its promises.

 

Regional Transmission Organizations: Millennium Order on Designing Market Institutions for Electric Network Systems

Congestion Zones

Full locational pricing at every node in the network is a natural consequence
of the basic economics of a competitive electricity market. However, it
has been common around the world to assert, usually without apparent need
for much further justification, that nodal pricing would be too complicated
and aggregation into single price zones, with socialization of the attendant
costs, would be simpler and solve all manner of problems. On first impression,
the argument appears correct. On closer examination, however, we find
the opposite to be true, once we consider the incentives created by aggregation
combined with the flexibility allowed by market choices. The debate continues,
but the negative evidence is accumulating.

For example, the first region in the United States to abandon a zonal
pricing model after it failed in practice was PJM, from its experience
in 1997 when its zonal pricing system prompted actions which caused severe
reliability problems. Given this experience, PJM adopted a nodal pricing
system that has worked well since March 1998.1 Subsequently, the original
one-zone congestion pricing system adopted for the New England independent
system operator (ISONE) created inefficient incentives for locating new
generation.2 To counter these price incentives, New England proposed a
number of limitations and conditions on new generation construction. Following
the Commission’s rejection of the resulting barriers to entry for new
generation in New England, there developed a debate over the preferred
model for managing and pricing transmission congestion.3 One zone was
not enough, but perhaps a few would do? In the end, New England proposed
going all the way to a nodal pricing system.4

A similar zonal congestion management market design created similar problems
in California, which prompted the Commission to reject a number of ad
hoc market adjustments and call for fundamental reform of the zonal congestion
management system. “The problem facing the [California] ISO is that the
existing congestion management approach is fundamentally flawed and needs
to be overhauled or replaced.”5 As a further example, the zonal pricing
system in Alberta, Canada, apparently produced a related set of incentives
that failed to give generators the price signal to locate consistent with
the needs of reliability: “Most of the electricity generation sources
are located in the northern part of the province and ever-increasing amounts
of electricity are being transported to southern Alberta to meet growth,
… [t]his is causing a constraint in getting electricity into southern
Alberta and impacting overall security of the high-voltage transmission
system.”6 As a result, Alberta has proposed a central generation procurement
process under the transmission operator to provide a means to get generation
built in the right place. Hence we have the ironic result of a supposed
simplification of the market under zonal pricing that seems headed towards
replacing central procurement by the monopoly utility with central procurement
by the monopoly transmission provider. This is hardly a true simplification,
nor is it consistent with the original intent to move towards a competitive
market and away from monopoly procurement.

Fact: A single transmission constraint in an electric network
can produce different prices at every node. Simply put, the different
nodal prices arise because every location has a different effect on
the constraint. This feature of electric networks is caused by the physics
of parallel flows. Unfortunately, if you are not an electrical engineer,
you probably have very bad intuition about the implications of this
fact. You are not alone.

Fiction: We could avoid the complications of dealing directly
with nodal pricing by aggregating nodes with similar prices into a few
zones. The result would provide a foundation for a simpler competitive
market structure.

The accumulating evidence reveals the flaws in this seductive simplification
argument.7 In reality, the simplification creates unexpected problems.
These problems in turn cause the system operator to intervene in the market
by imposing non-market solutions and socializing the costs. In the end,
the truly simple system turns out to be a market that uses nodal pricing
in conjunction with a bid-based, security-constrained, economic dispatch
for the real network, administered by an independent system operator.
Purchases and sales in the balancing spot market would be at the nodal
prices. Bilateral transactions would be charged for transmission congestion
at the difference in the nodal prices at source and destination. Transmission
congestion contracts would provide price certainty for those who pay in
advance for these financial “firm” transmission rights up to the capacity
of the grid. The system would be efficient and internally consistent.
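
A small numerical illustration of those settlement rules, using made-up
prices at three nodes rather than data from any actual market: spot
purchases and sales settle at the local nodal price, a bilateral schedule
pays congestion equal to the nodal price difference between sink and
source, and a transmission congestion contract between the same points
pays its holder exactly that difference.

    # Hypothetical nodal prices in $/MWh; all figures are illustrative.
    nodal_price = {"A": 20.0, "B": 35.0, "C": 28.0}

    def spot_settlement(node, mwh):
        """Positive mwh = purchase from the balancing spot market at that node."""
        return mwh * nodal_price[node]

    def bilateral_congestion_charge(source, sink, mwh):
        """Congestion charge for scheduling mwh from source to sink."""
        return mwh * (nodal_price[sink] - nodal_price[source])

    # A 100 MWh bilateral schedule from node A to node B pays
    # 100 * (35 - 20) = $1,500 in congestion; a holder of 100 MWh of
    # transmission congestion contracts from A to B receives the same
    # $1,500, hedging the charge.
    print(bilateral_congestion_charge("A", "B", 100))   # -> 1500.0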

Note that the problem with zonal price aggregation and poor incentives
does not extend to the use of market hubs within the framework of nodal
pricing.8 The hub-and-spoke model fits quite naturally within the nodal
pricing framework and has been operating successfully in PJM, producing
a liquid forward market at the PJM “Western Hub.”9 Market hubs can and
do provide virtually all the benefits of simplification often attributed
to zonal price aggregation. The difference between a hub-and-spoke model
and zonal price aggregation is simple: zonal aggregation gives you the
hubs without the spokes. The spokes capture the difference between the
nodal price and the market hub price. Zonal price aggregation assumes
these differences can be ignored, and then socializes the cost when they
cannot. And just as the wheel would not support the hub without the spokes,
the missing spokes in the zonal model lead to a collapse of price incentives
followed by the inevitable requirement for operator intervention.

In some cases, of course, the arguments offered for zonal price aggregation
may be true. The differences in nodal prices may be small, most of the
time, and the occasional excursions would not be commercially significant.
Or, to be more precise, the occasional excursions would not be significant
as long as the system operator did not socialize the costs. Under these
circumstances, there is a clear business opportunity. The RTO need not
and should not do anything different. Within this framework, an entrepreneur
would be free and able to set up a business that provided the aggregation
service, charging participants for the claimed benefits and providing
a revenue stream to compensate for the small risks involved.

When viewed from this perspective, the arguments in favor of zonal price
aggregation should not be seen as applying to the RTO. As we have learned,
when the RTO follows this path, trouble soon appears. Rather, the arguments
for zonal aggregation should be seen as either wrong or right. If wrong,
they should be ignored. If right, they should lead to a successful business.
But zonal price aggregation is usually a bad market design for an RTO.

Flowgates and Decentralized Congestion Management

The essential market ingredients outlined above include a coordinated
spot market integrated with system operations to provide balancing services
and congestion management. In principle, an alternative to central coordination
would be a system of decentralized congestion management that used the
same basic information as the system operator but could be handled directly
by the market participants.

The most prominent example of such a decentralized congestion management
model is the so-called “flowgate” approach. This is interesting as both
a theoretical argument10 and because it is the procedure embraced by NERC
as a principal market alternative to its administrative Transmission Loading
Relief (TLR) procedures.11 The details can be complicated, but the basic
idea is simple. The argument begins with the recognition that the contract
path model is flawed. Power does not flow over a single path from source
to sink, and it is this fact that causes the problems that lead to the
need for TLR in the first place. If a single contract path is not good
enough, perhaps many paths would be better. Since power flows along many
parallel paths, there is a natural inclination to develop a new approach
to transmission services that would identify the key links or “flowgates”
over which the power may actually flow, and to define transmission rights
according to the capacities at these flowgates. This is a tempting idea
with analogies in markets for other commodities and echoes in the electricity
industry’s many MW-mile proposals, the General Agreement on Parallel Paths
(GAPP), and related initiatives that could go under the heading of transmission
services built on link-based rights.

For any given total set of power injections and withdrawals, it is possible
to compute the total flows across each line in the transmission network.
Under certain simplifying assumptions, it would be possible further to
decompose the flows on the lines and allocate an appropriate share of
the flows to individual transactions that make up the total loads. If
we also knew the capacity on each line, then presumably it would be possible
to match the flows against the capacities and define transmission services.
Transmission users would be expected to obtain rights to use the individual
lines, perhaps from the transmission line owner or from others who owned
these capacity rights.
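
The following minimal sketch, using a made-up three-bus network and the
linear (“DC”) approximation, shows the computation described above: line
flows obtained from a matrix of distribution factors applied to the net
injections, then compared against assumed line capacities. The numbers are
invented for illustration.

    import numpy as np

    # Illustrative only: entry [l, b] is the MW flowing on line l per MW
    # injected at bus b and withdrawn at the reference bus (bus 3), for a
    # three-bus triangle with equal line reactances.
    ptdf = np.array([[2/3,  1/3],    # line 1-3
                     [1/3, -1/3],    # line 1-2
                     [1/3,  2/3]])   # line 2-3

    injections = np.array([150.0, 30.0])     # net MW at buses 1 and 2
    limits = np.array([100.0, 60.0, 80.0])   # hypothetical line capacities, MW

    flows = ptdf @ injections
    for flow, limit in zip(flows, limits):
        print(f"flow {flow:6.1f} MW / limit {limit:5.1f} MW",
              "OK" if abs(flow) <= limit else "OVERLOAD")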

In principle, these rights on each line might be seen as supporting a
decentralized market. Associated with each line would be a set of capacity
allocations to (many) capacity right holders who trade with the (many)
users of the system who must match their allocated flows with corresponding
physical capacity rights. Within this framework there are at least two
interesting objectives. First, that the trading rules should lead to an
efficient market equilibrium for a short period; and second, that the
allocated transmission capacity rights would be useful for supporting
the competitive market for geographically dispersed buyers and sellers
of power.

As a matter of principle, it is likely that the first objective could
be met. There should be some system of tradable property rights that users
of the system would seek, and whose trading would lead to an efficient
short-run dispatch of the system. This would seem to be nothing more than
an application of the principles of competitive markets with well-defined
property rights and low transactions costs. There is a general belief
that this short-run efficiency would be available in principle: “Efficient
short-run prices are consistent with economic dispatch, and, in principle,
short-run equilibrium in a competitive market would reproduce both these
prices and the associated power flows.”12 The problem has always been
with the natural definitions of the “physical” rights: these are cumbersome
to trade and enforce. The property rights are hard to define, and the
transaction costs of trading would not be low.

The second objective is perhaps more important. Presumably the allocated
transmission capacity rights would extend over many short-run periods,
for example, days, weeks, or months of hourly dispatch periods.13 Presumably
a natural characteristic that would be expected
of these physical rights would be that a seller of power with a known
cost of power production could enter into an agreement with a distant
buyer to deliver a known quantity of power at a fixed price, including
the out-of-pocket cost for transmission using the transmission right.
Many other contracts could be envisioned, but this minimal possibility
would seem to be essential; and it is broadly taken for granted that this
capability will exist in the open-access transmission regime. However,
any approach that defines tradable physical capacity rights based on flows
on individual lines faces obstacles that appear to make it impossible
to meet this minimal test.

There are many variants of such link-based transmission rights that one
can imagine, and the industry has been struggling with these ideas for
years. Here the flowgate argument follows the outline above. The system
operators and others demur on the grounds that the electric system is
more complicated and there are simply too many lines and possible constraints
to manage in a decentralized environment. The proponents argue that it
is not necessary to consider all the lines and all the possible constraints.
Rather they propose to consider only a few critical constraints, the flowgates,
and to focus decentralized trading on these. The assertion is that the
commercially significant congestion can be represented by a system with:

  • Few flowgates or constraints.

  • Known capacity limits at the flowgates.

  • Known power transfer distribution factors (PTDF) that decompose a
    transaction into the flows over the flowgates.

Under these simplifying assumptions, the decentralized model might work
in practice. The RTO would identify the flowgates. The capacity rights
would be allocated or auctioned somehow to the market participants. Similarly,
the RTO would publish the PTDF table that would allow individual market
participants to compute the effect of their transactions on the flowgates.
The participants would then purchase the corresponding flowgate capacity
rights in the market. This trading of capacity rights would take place
in decentralized forward markets. Transactions that had assembled all
the capacity rights needed would then be scheduled without further congestion
charges. Real-time operations would be handled somehow, typically not
specified as part of the flowgate model.
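
Under those simplifying assumptions, the participant’s calculation would be
straightforward. The sketch below uses a hypothetical two-flowgate PTDF table
(the names and numbers are invented) to compute how many MW of rights a
transaction would need at each flowgate.

    # Illustrative only: a hypothetical published PTDF table for two flowgates.
    # ptdf_table[flowgate][(source, sink)] is the MW of flow over the flowgate
    # per MW transacted from source to sink.
    ptdf_table = {
        "flowgate_1": {("A", "C"): 0.42, ("B", "C"): 0.15},
        "flowgate_2": {("A", "C"): 0.18, ("B", "C"): 0.55},
    }

    def required_rights(source, sink, mw):
        """MW of capacity rights the transaction must assemble at each flowgate."""
        return {fg: factors.get((source, sink), 0.0) * mw
                for fg, factors in ptdf_table.items()}

    print(required_rights("A", "C", 200))
    # {'flowgate_1': 84.0, 'flowgate_2': 36.0}
    # If the real PTDFs drift with grid conditions, the rights assembled in
    # the forward market may no longer match the transaction's actual impact.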

There is some experience with this flowgate model. However, the experience
is limited and what experience we do have is not good. In particular,
these simplifying assumptions and the corresponding flowgate model for
decentralized congestion management were applied as part of the NERC Pilot
Project for Market Redispatch in 1999, to create a decentralized alternative
to administrative TLR curtailments. In the end, and despite the substantial
turmoil created by the TLR system, the result was that apparently there
were no successful applications of any decentralized trades under this
approach.14 By contrast and at the same time, the centralized coordinated
market in PJM regularly provided successful market alternatives to administrative
TLR curtailments. Perhaps the flowgate problems will be ironed out as
the NERC experiment continues,15 but the experience reinforces the need
to look more closely at the flowgate model.

Despite the appeal of a move away from the contract path model and closer
to the actual underlying reality of the transmission network, these generic
methods built on flowgate rights must confront the problems inherent in
the simplifications. Are there only a few flowgates? Are the capacity
limits known in advance? Are the PTDF impacts stable and known in advance
of real-time?

Those who demur in accepting the flowgate model as a method for organizing
the use of the transmission system would answer in the negative for each
of these three questions. First, there are many potential constraints,
so it would be necessary to obtain many capacity rights on flowgates.
The number of rights that would have to be acquired in a complete version
of a flowgate model generally would not be determined simply by the amount
of power that flows in the actual dispatch. Under current practice, the
system operators typically adhere to “(n-1) contingency” constraints on
power flows through the grid. This means that the allowed power loads
at every location in the transmission system must be such that in the
event one of a series of possible contingencies occurs, the instantaneous
redistribution of the power flows that results will still meet minimum
standards for thermal limits on lines and will still avoid voltage collapse
throughout the system. We can think of the terminology as coming from
the notion that one of the “n” lines in the system may drop out of service,
and the system must still work with the (n-1) lines remaining. The actual
contingencies monitored can be more diverse, but this interpretation conveys
the basic idea of an (n-1) contingency-constrained power flow.

Hence, a single line may have a normal limit of 100 MW and an emergency
limit of 115 MW.16 The actual flow on the line at a particular moment
might be only 90 MW, and the corresponding dispatch might appear to be
unconstrained. However, this dispatch may actually be constrained because
of the need to protect against a contingency. For example, the binding
contingency might be the loss of some other line. In the event of the
contingency, the flows for the current pattern of generation and load
would redistribute instantly to cause 115 MW to flow on the line in question,
hitting the emergency limit. No more power could be dispatched than for
the 90 MW flow without potentially violating this emergency limit. The
90 MW flow, therefore, is constrained by the dispatch rules in anticipation
of the contingency. The corresponding prices would reflect these contingency
constraints.17
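
A minimal sketch of that arithmetic, using a hypothetical line outage
distribution factor (the 0.5 factor and the 50 MW flow on the outaged line
are invented to reproduce the 90 MW and 115 MW figures above), shows why the
seemingly unconstrained 90 MW dispatch is in fact binding.

    # Illustrative only: numbers invented to match the 90 MW / 115 MW example.

    def post_contingency_flow(pre_flow, lodf, outaged_line_flow):
        """Flow on the monitored line after the contingency: its own
        pre-contingency flow plus the share of the lost line's flow
        (the line outage distribution factor) that shifts onto it."""
        return pre_flow + lodf * outaged_line_flow

    pre_flow = 90.0           # MW on the monitored line in the actual dispatch
    emergency_limit = 115.0   # MW post-contingency (emergency) limit
    lodf = 0.5                # hypothetical distribution factor
    outage_flow = 50.0        # MW carried by the line assumed lost

    print(post_contingency_flow(pre_flow, lodf, outage_flow))   # 115.0
    # The emergency limit would be reached exactly, so no additional power
    # can be scheduled even though the line appears to have 10 MW of headroom.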

Depending on conditions, any one of many possible contingencies could
determine the current limits on the transmission system. During any given
hour, therefore, the actual flow may be, and often is, limited by the
impacts that would occur in the event that the contingency came to pass.
Hence, the contingencies don’t just limit the system when they occur;
they are anticipated and can limit the system all the time. In other words,
analysis of the power flows during contingencies is not just an exception
to the rule; it is the rule. The binding constraints on transmission generally
are on the level of flows or voltage in post-contingency conditions, and
flows in the actual dispatch are limited to ensure that the system could
sustain a contingency. Operation of a complete flowgate model, therefore,
would require a trader to acquire the rights on each link sufficient to
cover its flows on that line in each post-contingency situation.

An argument sometimes offered is that this problem is not serious because the actual
dispatch will have only a few of the potential constraints actually binding.
Typically this is true, but it does not avoid the difficulty for the simple
reason that we don’t know in advance which constraints will be binding.
Were it otherwise the system operator would not have to monitor all the
constraints that are typically considered. In fact, the large list of
potential constraints monitored by the system operator is already a select
group identified as the important subset from the thousands or millions
of possible constraints that could be defined given the large number of
lines and the large number of contingencies. The mere fact that the system
operator has identified the constraints would arguably be enough to require
an associated flowgate capacity right in order to ensure that the resulting
transaction would be feasible.

The accumulating experience in PJM is well documented and amply illustrates
the point. In one outside study intended to support the development of
a zonal model and decentralized congestion management through something
like a flowgate model, a set of 28 constraints were identified as important
and analyzed for the variations in the equivalent of a PTDF table. While
28 may seem a large number and difficult to deal with in assembling the
capacity rights to use the transmission system, it turned out not to be
large enough. In the event, the first six months of operation of locational
pricing in PJM found 43 constraints actually binding. Most importantly,
none of these actual constraints were in the list of 28 supposedly easy-to-identify
flowgates.18 This suggests the magnitude of the difficulties faced when
predicting which constraints will be binding.

The obstacle of too many constraints to specify a complete flowgate model
might be overcome if it were still possible to identify in advance how
much capacity there is at each flowgate. This is an old problem with the
uncomfortable reality that for many of the constraints it is not possible
to specify the limiting value without also knowing the pattern of the
loads. For example, interface constraints for voltage protection are routinely
described as a range of maximum values on real power flows, with the actual
value being set and changed regularly during real time operations. The
PJM Eastern Reactive Transfer Limit is reset at least every 15 minutes
and can vary over a range of 4000 MW to 7000 MW, depending on system conditions.20
This is essentially the same problem as defining the available transmission
capacity. As the New York Power Pool (NYPP) observed in a typical comment
heard from system operators:

“The primary responsibility of the NYPP system operator is and must
be to maintain the reliability of the bulk power system. The operator
must have the flexibility to decide, for example, what level of transmission
reserve capacity should be retained under various conditions and facilities’
loadings to meet contingencies as they may arise. Thus, actual transmission
availability, or, more correctly, available transmission transfer capability,
may be less than the thermal limits of the facilities, and the difference
may change as conditions change. The Commission should make certain that
all participants understand and accept these factors.”21

In addition to recognizing that the capacity limits are not always known
in advance, the other reality is the lack of truly stable and known PTDF
tables. The flows over the lines and voltages at the buses will depend
on all the other receipts and deliveries on the grid. Thus, the flow over
a particular flowgate that can be attributed to a particular transaction
will be changing all the time, so it will be difficult to know how much
of a flowgate capacity right is required or how much would be used. There
are many causes of this ex ante ambiguity in the PTDFs. First, the PTDFs
are a function of the entire configuration of the grid. With any line
out of service, there are different PTDFs, and the configuration of the
grid is changing all the time. Even with the same configurations on the
wires, there are many electrical devices, such as phase angle regulators,
whose very purpose is to change the apparent impedance of lines as a function
of changing loads and, therefore, to change the PTDFs throughout the system.
Furthermore, there are inherent nonlinearities in the flows and constraints,
especially the ubiquitous so-called “nomogram” constraints that attempt
to approximate even more complex interactions in the system. The real system
simply does not conform to the simplified textbook pure DC-load model, which
is useful for illustrating the effects of network flows but is at best a
linearized local approximation of the real system that can be used to guide
the dispatch. It is for these
reasons that PJM updates both the load flow estimate and calculation of
its equivalent of PTDF tables every five minutes.22 In reality, the PTDFs
needed for a complete flowgate model would be anything but known in advance.
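
The dependence of the PTDFs on grid configuration can be seen even in a toy
example. The sketch below, a minimal DC-approximation calculation for an
invented three-bus network, computes the PTDF matrix and then recomputes it
with one line out of service; every entry changes.

    import numpy as np

    # Illustrative only: invented three-bus network under the DC approximation.
    # Line data: (from_bus, to_bus, reactance); bus 2 is the reference bus.

    def ptdf_matrix(lines, n_bus, ref=2):
        A = np.zeros((len(lines), n_bus))          # branch-bus incidence
        Bd = np.zeros((len(lines), len(lines)))    # diagonal of 1/reactance
        for k, (i, j, x) in enumerate(lines):
            A[k, i], A[k, j] = 1.0, -1.0
            Bd[k, k] = 1.0 / x
        keep = [b for b in range(n_bus) if b != ref]
        B_red = (A.T @ Bd @ A)[np.ix_(keep, keep)]
        return Bd @ A[:, keep] @ np.linalg.inv(B_red)   # lines x non-ref buses

    full_grid = [(0, 1, 0.1), (0, 2, 0.1), (1, 2, 0.1)]
    print(ptdf_matrix(full_grid, 3))      # entries of 1/3 and 2/3

    one_line_out = [(0, 1, 0.1), (1, 2, 0.1)]
    print(ptdf_matrix(one_line_out, 3))   # radial grid: entries become 0 or 1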

These criticisms of the flowgate model sit at the foundation of the argument
that it would not be an appropriate model for operating the power system.
However, as with the arguments above for zonal congestion management,
the criticisms may be less applicable to a commercial model that would
serve as an entrepreneurial business. Suppose that the criticisms are
correct, but the commercial significance is small. The RTO could operate
the coordinated dispatch and define financial transmission rights as outlined
above for the real system rather than the flowgate approximation. Although
it is not possible to identify all the components of the flowgate model
in advance, it is possible to determine in advance if a particular load
flow would be feasible. Hence, despite the complexity of the grid, a set
of simultaneously feasible point-to-point financial transmission rights
can be defined. For a given configuration of the grid, the RTO can guarantee
the point-to-point FTRs without using its powers to tax the participants
and socialize the costs.

Under the simplifying assumptions of the flowgate model, it would be
possible to decompose these point-to-point financial transmission rights
into their component flowgates, implied flow capacities on flowgates,
and the associated PTDFs. If the approximation errors of the flowgate
model are not large, then it would be possible for a new business to provide
the service of organizing trading of flowgate rights that could be reconfigured
to create new FTRs. The differences in flows and capacities might be small,
most of the time, and the occasional excursions would not be commercially
significant. Or, to be more precise, the occasional excursions would not
be significant as long as the system operator did not socialize the costs.
Under these circumstances, there is a clear business opportunity. The
RTO need not and should not do anything different than outlined above as
part of the essential market design. Within this framework, an entrepreneur
would be free and able to set up a business that provided the flowgate
service, charging participants for the claimed benefits and providing
a revenue stream to compensate for the small risks involved. In effect,
the business could take the financial risk that the reconfigured FTRs
might not be feasible in the real network, but if the flowgate assumptions
are valid this risk would be small.

When viewed from this perspective, the arguments in favor of the flowgate
approach should not be seen as applying to the RTO. When the RTO follows
this path, trouble is likely to appear because the real system is more
complicated. Rather, the arguments for the flowgate approximation should
be seen as either wrong or right. If wrong, they should be ignored. If
right, they should lead to a successful business. But the flowgate model
is likely to be a problematic market design for an RTO.

Footnotes

1 William W. Hogan, “Restructuring the Electricity Market: Institutions
for Network Systems,” Harvard-Japan Project on Energy and the Environment,
Center for Business and Government, Harvard University, April 1999, pp.
37-44.

2 The use of zones for collecting transmission fixed charges is not the
issue here. The focus is on managing transmission congestion. For a critique
of the previously proposed one-zone congestion pricing system, see Peter
Cramton and Robert Wilson, “A Review of ISO New England’s Proposed Market
Rules,” Market Design, Inc., September 9, 1998.

3 Federal Energy Regulatory Commission, New England Power Pool Ruling,
Docket No. ER98-3853-000, October 29, 1998.

4 ISO New England, “Congestion Management System and a Multi-Settlement
System for the New England Power Pool,” FERC Docket EL00-62-000, ER00-2052-000,
Washington DC, March 31, 2000. The proposal includes full nodal pricing
for generation and, for a transition period, zonal aggregation for loads.

5 Federal Energy Regulatory Commission, “Order Accepting for Filing in
Part and Rejecting in Part Proposed Tariff Amendment and Directing Reevaluation
of Approach to Addressing Intrazonal Congestion,” Docket ER00-555-000,
90 FERC ¶ 61,000, Washington DC, January 7, 2000, p. 9. See also Federal
Energy Regulatory Commission, “Order Denying Requests for Clarifications
and Rehearing,” 91 FERC ¶ 61,026, Docket ER00-555-001, Washington DC, April
12, 2000, p. 4.

6 “Alberta Transmission Czar Wants More Generation,” Electricity Daily,
Vol. 14, No. 77, April 21, 2000, p. 3.

7 William W. Hogan, “Nodes and Zones in Electricity Markets: Seeking
Simplified Congestion Pricing” in Hung-po Chao and Hilliard G. Huntington
(eds.), Defining Competitive Electricity Markets, Kluwer Academic Publishers,
1998, pp. 33-62. Steve Stoft, “Transmission Pricing in Zones: Simple or
Complex?”, The Electricity Journal, Vol. 10, No. 1, January/February 1997,
pp. 24-31.

8 William W. Hogan, “Restructuring the Electricity Market: Institutions
for Network Systems,” Center for Business and Government, Harvard University,
April 1999, p. 52, available from the author’s web page.

9 “The New York Mercantile Exchange will launch an electricity futures
contract March 19 at the PJM western hub, one of the most liquid markets
in the Eastern grid. … The PJM hub already features an active and growing
over-the-counter forwards market. A liquid hub can have a downside [for
the futures contract] given that players are content trading in the OTC,
said one Northeast broker.” Power Markets Week, February 8, 1999, p. 14.

10 Hung-po Chao and Stephen Peck, “A Market Mechanism for Electric Power
Transmission,” Journal of Regulatory Economics, Vol. 10, No. 1, 1996,
pp. 25-59. Steven Stoft, “Congestion Pricing with Fewer Prices than Zones,”
Electricity Journal, Vol. 11, No. 4, May 1998, pp. 23-31.

11 Congestion Management Working Group of the NERC Market Interface Committee,
“Comparison of System Redispatch Methods for Congestion Management,” September
1999.

12 W. Hogan, “Contract Networks for Electric Power Transmission,” Journal
of Regulatory Economics, Vol. 4, 1992, p. 214.

13 This is apart from the problems encountered with changes of the grid
capacity or configuration. Link-based rights have other substantial problems
for dealing with system expansion.

14 Congestion Management Working Group of the NERC Market Interface Committee,
“Final Report on the NERC Market Redispatch Pilot,” November 29, 1999,
filed with FERC on December 1, 1999.

15 NERC, “Market Redispatch Pilot Project Summer 2000 Procedure,” March
31, 2000.

16 Expressing the limits in terms of MW and real power is shorthand for
ease of explanation. Thermal limits are actually in terms of MVA for real
and reactive power.

17 Jacqueline Boucher, Benoit Ghilain, and Yves Smeers, “Security-Constrained
Dispatch Gives Financially and Economically Significant Nodal Prices,”
Electricity Journal, November 1998, pp. 53-59.

18 Richard D. Tabors, “Transmission Pricing in PJM: Allowing the Economics
of the Market to Work,” Tabors Caramanis & Associates, February 24, 1999,
p. 31. This is a careful study that is among the rare instances with easily
available and documented assumptions. See the PJM web page for the record
of actual constraints.

19 See the PJM web page spreadsheet report on historical transmission
limits, “Historical_TX_Constraints.xls.” Over the period January 1998
to April 2000, there were 610 constraint-days recorded, with the same
constraint appearing on more than one day. Based on the “Monitor” and
“Contingency” names, there were 161 unique constraints.

20 Andy Ott, PJM, personal communication.

21 Comments of Member Systems of the New York Power Pool, “Request for
Comments Regarding Real-Time Information Networks,” Docket No. RM95-9-000,
Federal Energy Regulatory Commission, July 5, 1995, pp. 9-10.

22 Andy Ott, PJM, personal communication.

Regional Transmission Organizations: Linchpins of Restructuring

Recognizing the new reality, the Federal Energy Regulatory Commission
(FERC) declared in Order No. 888 that, with easy entry, plentiful gas
supplies, and competing marketers, new plants could charge market-based
rates. In effect, we deregulated the wholesale generation market, over
which FERC has jurisdiction, on a prospective basis. States, in charge
of retail sales, began passing laws rescinding the monopolies of utilities
and offering customers the opportunity to change suppliers periodically.
When retail choice finally emerges, both ends of the electricity market,
the production entrance and the consumption exit, will stand ready to
benefit from restructuring. Regional Transmission Organizations (RTOs)
provide the missing link, transmitting the power from the generator to
the consumer. That makes RTOs the linchpin of restructuring.

FERC Order No. 2000 and RTOs

Cost of service regulation ensured that transmission became the “stepchild”
of generation. Utilities made more profit from costlier plants than from
transmission. As a result, observers saw the potential for owners to stifle
competition by shutting out competing suppliers from access to the grid.
Order No. 888 took a small remedial step. It required public utilities
to file open access tariffs and to “buy” transmission under those tariffs,
even for their own retail customers. Open access required a lot of regulation and
engendered disputes over allegations of abuses by integrated utilities,
real and imagined. FERC saw the need to go further.

These efforts culminated in Order No. 2000, which established a more
ambitious goal for a regulatory agency. FERC defined a new utility, the
RTO, an entity that represented “a viable, stand-alone transmission business.”
FERC declared that the RTO could operate as a for-profit company or a
not-for-profit institution, or a combination. Order No. 2000 gave the
RTO a business plan, four characteristics, and eight functions, as follows:

Four Characteristics of the RTO Business Plan

  • Independence from market participants, namely, sellers of electricity.
    A for-profit transmission company may allow integrated utilities to
    retain passive ownership interests (a strictly financial stake) in
    the RTO without restriction. FERC placed some restrictions on active
    (voting) ownership – a safe harbor (no questions asked) of 5 percent
    for an individual, a benchmark of 15 percent for a class and a presumption
    of a five-year duration. The benchmark allows for different results
    in individual cases, and the presumption permits more if the public
    interest requires it. Passive owners may retain special rights to
    protect the integrity of their capital investment. FERC indicated
    it would audit independence first, after two years, and every three
    years later. Aside from mentioning independent governing boards, FERC
    gave no guidance to not-for-profit institutions.

  • Scope and Regional Configuration to show a potentially viable market

  • Operational Authority over the grid

  • Short-term reliability

Eight Functions of the RTO Business Plan

  • Tariff administration and design to ensure that the RTO controls
the rates that customers see and, in that way, assure proper price
    signals

  • Congestion management, preferably through market means, including
    expansion to prevent inefficient allocation of resources

  • Parallel path flows to ensure the transmission customer pays for
    its use of the system

  • Ancillary services, reserves and the like, as a provider of last
    resort

  • OASIS, including information for prospective customers to enable
    transactions to occur. Again, a for-profit company would have to handle
    this.

  • Market monitoring, either itself or through a contractor, to ensure
    that competition works

  • Planning and expansion of the system, including ordering construction
    when necessary. Here, again, a for-profit company would have to undertake
    this function in order to remain a going concern.

  • Inter-regional coordination with other RTOs and utilities to ensure
    smooth passage for electricity across the grid. Coordination includes
    standards for technical and communications issues. A for-profit company
    has every reason to succeed here.

The novelty in Order No. 2000 arises from the decision to offer incentives
for RTO formation, rather than mandate the result. FERC understood that
the market, not regulatory fiat, would ensure the viability of stand-alone
transmission businesses. FERC also understood that cost of service regulation,
which made sense at a time when policy makers thought they needed to regulate
a natural monopoly, hindered the development of competition. Order No.
2000 stated that FERC stood ready to grant these eight price reforms:

Eight Price Reforms

  • Rate moratoria that freeze prices (effective until January 1, 2005)

  • Rate moratoria that assume a static capital structure (effective
    until January 1, 2005) even if the company leverages toward more debt

  • Higher rate of return to reflect increased risk of stand-alone (no
    longer diversified) business

  • Rate of return that changes according to a pre-determined formula

  • Shorter depreciation, such as the life of the contract, for a new
    plant

  • Favorable changes in depreciation – to straight line away from accelerated
    – for existing facilities integrated into an RTO

  • Incremental pricing in the form of a surcharge added to the access
    charge for customers that need new facilities

  • Performance-based rates with internal or external indices.

FERC also said that RTOs may request any other reform upon a showing
of benefit through a cost-benefit analysis and a showing that the reform is
needed to meet the goal of Order No. 2000: a viable, stand-alone transmission
business. The preamble
discusses the possibility that for-profit companies may seek a new method
of calculating return and may ask for acquisition adjustments. FERC would
not use the RTO Rule as a vehicle for generic reform of the current discounted
cash flow method for calculating return. On acquisition adjustments, Order
No. 2000 restated current policy that allows revaluation of the rate base
to match purchase price, rather than the original cost, if the utility
shows a “demonstrable public benefit.”

While for-profit RTOs have a need for price reforms to increase efficiency
in the grid and could apply the eight listed in Order No. 2000, a not-for-profit
system operator could not make the showing or apply these reforms. In
the latter case, the money would flow to integrated utilities, as the
transmission grid’s owners, the very group FERC concluded had the incentive
to hold back the march toward competition.

RTOs and Restructuring

In any competitive market, companies reduce costs to increase earnings.
With the incentives FERC promises in Order No. 2000, so will for-profit
RTOs. Just as pleasing the customer creates more business and higher profit
in any market, so too in transmission. This will most likely occur
in the for-profit context, with its lack of complex checks and balances
among competing interest groups and heavy-handed regulatory oversight.
Order No. 2000 allows the profit motive to work its magic. The RTO must
please the sellers of electricity, whether generators or marketers, as
the integrated utilities no longer control the grid. The stand-alone transmission
business depends for success on generators connecting to the grid or marketers
seeking to transmit to customers. The integrated utility, in contrast,
deals mostly with the diffuse class of ultimate retail consumers, except
for wholesale sales that account for about 10 percent of revenue.

Using Order No. 2000, transmission companies can prevent congestion problems
from arising in the first place by working with generators and consumers.
With incentive rates to guide them, for-profit organizations will better
maintain their facilities to reduce breakdowns. For-profit RTOs will more
likely expand the grid and help generators make the right decisions about
location.

Moreover, if congestion does occur, for-profit RTOs with proper incentives
offer more efficient and innovative solutions. These include inducing
customers to shift consumption to off-peak times or to reduce demand altogether
during peak hours. For-profit RTOs would create a more hospitable environment
for distributed generation. Rather than viewing it as competition, for-profit
RTOs will look favorably on fuel cells and mini-turbines as tools to improve
resource allocation.

With the company dependent for its earnings on market criteria, not cost
of service, the customers and the utility acquire common interests. Benchmarks
of this kind create the same mindset among for-profit RTO managers. Increases
in volume over the transmission grid mean more profit
when increases prove economical. Because the for-profit RTO is not a generator,
it has no reason to restrict the flow of electricity. Integrated utilities,
on the other hand, protect earnings from their own generation by restricting
transmission capacity for other generators. Three of the indicators point
that way: throughput, reliability, and price. Meeting or exceeding targets
on throughput and reliability means more megawatts flow over the grid and
fewer outages. Lower prices also lead to greater volume.

The for-profit RTO, therefore, has every reason to create conditions
of plenty rather than scarcity. Other means of managing congestion include
reducing peak demand by paying customers to switch off just as the airlines
pay customers to leave overbooked flights. For-profit RTOs can also time
maintenance work to ensure peak operation during peak demand. The for-profit
RTO will invest in necessary technology, expand the grid when feasible
and otherwise try to accommodate customers, generators, and marketers.
Customer satisfaction also counts toward the for-profit RTO’s profitability.

In the end, however, stand-alone transmission businesses can remain viable
under performance-based rates. The more effective plans, such as the plan
I instituted in Mississippi, use internal criteria. Possible measures
include: volume of electricity flowing over the wires, length or percentage
of outages, customer satisfaction and percentage of requests granted.
The reward comes from enacting a price cap and allowing the RTO to earn
above its cost of service. The RTO would share the proceeds with customers.
The RTO would bear losses in the same way. The plan could include periodic
audits, but, in any event, eventually the RTO would have to return to
FERC for a rate case.
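
As a purely illustrative sketch (the numbers and the even split are invented,
not the parameters of the Mississippi plan or of any FERC-approved formula),
the sharing mechanism can be reduced to a few lines:

    # Illustrative only: hypothetical earnings sharing under a price cap.

    def shared_earnings(actual_earnings, cost_of_service, rto_share=0.5):
        """The RTO keeps rto_share of any earnings above its cost of service
        and bears the same share of any shortfall; the rest goes to customers."""
        difference = actual_earnings - cost_of_service
        return cost_of_service + rto_share * difference

    # Surplus year: revenues under the cap exceed the cost of service.
    print(shared_earnings(actual_earnings=110.0, cost_of_service=100.0))  # 105.0
    # Loss year: the RTO bears part of the shortfall in the same way.
    print(shared_earnings(actual_earnings=90.0, cost_of_service=100.0))   # 95.0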

As easy as it sounds to recite the concept of performance-based rates,
we admit the difficulty lies in the details. Setting the proper categories
of performance for the RTO requires thought. Setting the benchmark for
profit and loss requires deliberation. Establishing the sharing mechanism
and procedures for auditing requires an effort, as does establishing the
term of the plan. The need for finding the right answers gives the incentive
to include customers, meaning the sellers of electricity, as well as consumers.
The “stakeholder process,” about which the backers of not-for-profit
institutions boast, belongs in the for-profit RTO as well. The difference
arises from the goal of the process. With the not-for-profit entity, it
is left to a debating society to decide lofty issues of plans and needs.
With the for-profit RTO the procedure establishes the basis for transactions,
the actual transmission rates and the expectations of the market.

We urge customers and consumers to negotiate with the for-profit RTO.
FERC conducted conferences to help the industry and customers form stand-alone
transmission businesses. These meetings should continue, once the for-profit
RTO forms and decides to operate under performance-based rates. This type
of rate program, which does nothing more than replicate the market, offers
sellers the chance to discuss performance-based rates and criteria and to
establish plans that help sellers make sales, RTOs make a profit, and consumers
benefit from the promise of restructuring.

Pioneering Privatized Transmission: National Grid’s Perspective

Based on my experience at National Grid, I maintain that to be most successful,
transmission must be independent – in terms of management and ownership
– from generation and supply. Only full legal unbundling offers the cast-iron
guarantee of non-discrimination that is an essential building block of
a competitive market. As an added benefit, accounting in an unbundled
transmission operation becomes less ambiguous than the potentially murky
vertically-integrated price structures that can make life difficult for
regulators.

In addition, I believe that for maximum efficiency gains, transmission
ideally should be run by a private, for-profit company, as opposed to
a quasi-governmental organization or a non-profit entity. By making the
transmission operator subject to the disciplines of the equity and debt
markets, management is incentivized to make the right financial and operational
decisions, and is held accountable by shareholders and the energy market
itself, as well as regulators, for its decisions.

Despite my convictions on the subject, I admit that there is no one model
that can truly be deemed the best for all markets. But one thing is clear
– the structure of transmission is an important regulatory consideration
that must be addressed as part of any deregulation plan. If it is not
addressed at the outset, then experience suggests it will have to be addressed
soon after, as is the case in the U.S.

Deregulation: A Global Snapshot

During the past two years, Europe has been the stage for growing customer
choice, increasingly intense merger activity, and intensifying pressure
for reluctant governments and electric utilities to open up their markets
more rapidly.

Today, the continent is characterized by different countries in different
stages of deregulation. The Scandinavian countries, along with the U.K.,
already have fully open markets with independent transmission grids. At
the other end of the spectrum, France, Austria, and Italy have presently
opted for the minimum European Union (EU) market opening of around one-third
of electricity sales, with relatively little formal industry restructuring
and investor ownership.

The European Commission’s Electricity Directive, implemented in February
1999, requires management unbundling of transmission within vertically-integrated
utilities, but the pressure is on to go further. The European Commission
continues to press for full liberalization and the development of a single
European market and will bring forward further proposals for change later
this year.

In the U.S., about half of the 50 states are implementing customer choice
in some manner. Most others are at least studying deregulation. I believe
that it’s not really a question of “if” anymore; it’s more a question
of “when.” The first wave of deregulation focused on competitive generation
and supply, and open access to transmission. Restructuring of transmission
and system operations represents the next frontier in the U.S.

The U.S. Federal Energy Regulatory Commission (FERC) has endorsed the
concept of Regional Transmission Organizations (RTO) as a means toward
efficiencies and improved reliability, and those entities owning or operating
transmission are scheduled to provide details of their proposals for RTO
creation by the beginning of 2001.

Of course, electricity industry deregulation is now a global phenomenon,
and elsewhere in the world we see similar patterns of movement to those
we see in the U.S. and Europe. For example, in South America, countries
including Brazil, Argentina, and Chile have opened up their electricity
markets. Australia, like the U.S., is deregulating the industry on a state-by-state
basis, with Victoria the most advanced. Other countries are not so far
along, but these days it seems hard to find a country where electricity
privatization is not being considered or at least talked about. Even Russia
is exploring the option as a means of advancing much-needed investment.

Which Model Rules?

Although many countries are studying or implementing electricity deregulation
and privatization, a preferred model for transmission ownership has yet
to emerge. However, the U.K.’s success with privatized, for-profit transmission
should do much to convince people that this is the best model to follow.
Another option is partially separated transmission that is vertically
integrated with generation or supply, but few seem convinced that this
provides the optimal level of transparency.

Non-profit Independent System Operators (ISO) combined with utility-owned
transmission networks are found in various parts of the U.S. such as California,
the Midwest, and the Northeast. These entities have had varying degrees
of success. Theoretically they have facilitated competition, but they
have also come under fire for market intervention, such as setting artificial
price caps, as those regions struggle to build competitive markets. Critics
also claim they have yet to deliver the necessary levels of capital investment
and cost control.

Which leads us to Regional Transmission Organizations. One form of an
RTO is a Transco, which to most people means a for-profit, privately-run,
independent business, such as National Grid. Unlike ISOs, Transcos actually
own, or at least have control of, transmission assets, in addition to
having system operation responsibility.

While in the U.K. transmission and distribution were separated, no one
has yet come up with a compelling argument for dividing the two, if they
both remain regulated, and standards of conduct guarantee independent
and transparent operations.

The FERC initiative mentioned earlier calls for RTOs as a means of establishing
independent management and control of transmission, which could include
incentive-based rates, and more effective planning of transmission investments
to better manage congestion and the connection of new generation. This
in turn should help to reduce uncertainty and price volatility, which
is of increasing concern in the U.S.

U.S. utilities and ISOs are gearing up to respond to the FERC order,
but what the RTOs will actually look like in the end is still anyone’s
guess. In New England and elsewhere, it is proposed that the RTO actually
consist of a Gridco – a for-profit transmission entity – combined with
a not-for-profit ISO. In general, it is assumed that the Gridco would
operate the transmission network, and the ISO would provide system operation
including dispatch of generation.

Which is the correct model? Obviously, it depends on a number of variables
including political agendas and the strength of the incumbent utilities,
but a look at the U.K. experience provides substantial fodder for an argument
that the RTO model is the most efficient.

The Deregulation Pioneers

National Grid was created in 1990 during the restructuring and privatization
of the electricity industry in England and Wales. We own and operate one
of the world’s most complex transmission systems. We act as the system
operator, scheduling and dispatching generation to meet demand in accordance
with market rules, while ensuring consistent application of technical
rules. Currently we also provide market services such as the calculation
of wholesale prices, market settlement, and the publication of market
information. In today’s parlance, we are a Transco.

We hold a unique position as one of the first privatized transmission
companies in the world, operating in one of the world’s first fully deregulated
electricity markets. This has given us a unique perspective as we watch
other markets open up around the world. And in Argentina and the U.S.,
we are actually participants in the process.

When the U.K. government privatized and split up its vertically-integrated,
state-owned electricity industry, it was focused on the benefit to be
achieved. National Grid’s role was set out by statute and was crystal
clear – to run an economic, efficient, and coordinated transmission system,
and provide non-discriminatory, transparent, and equal access to our grid
for all generators and suppliers. The goal of privatized transmission
was equally clear and reinforced by our license obligation – to facilitate
competition in the generation and supply of electricity, including the
entry of new competitors into the market.

Regulators, politicians, and industry proponents staking their reputations
on the wisdom of deregulation can take comfort in the fact that the tangible
benefits of deregulation have already been proven in the U.K., and they
are very real:

  • Guaranteed access to the transmission system has spurred competition
    and new investment. Since 1990, more than 20 gigawatts (GW) of new generation
    have been commissioned.
  • Deregulation has increased choices (see Figure 1). When the U.K. industry
    was first liberalized, it had seven generators and fewer than 20 suppliers.
    Today there are more than 100 companies competing to generate and/or
    supply electricity.
Figure 1: Generation market share among major energy producers

  • Privatization has benefited the environment. Shifts in the fuel mix
    in the last 10 years, including the growth of natural gas from virtually
    zero in 1990 to around 40 percent of energy supplied today, will enable
    the U.K. to meet both its Rio and Kyoto targets, achieving carbon
    dioxide emissions reductions of 15 percent below 1990 levels by the
    end of the year, and 20 percent below 1990 levels by 2010.

  • Fifteen GW of predominantly coal-fired generation have been retired
    since 1990. Sulphur dioxide and nitrogen oxide emissions are both down
    by around two-thirds since 1987, despite growth in electricity demand.

  • Prices have declined by 23 percent for residential customers in real
    terms since 1990, and about 25 percent for industrial customers.

  • Consumers are exercising their new-found choice. Customers of more
    than 1 megawatt (MW) were able to switch suppliers as of April 1990.
    By April 1999, 71 percent had done so. The second tier of customers,
    in the 100 kilowatt (kW) to 1 MW range, were given choice in April
    1994, and 51 percent had switched by April 1999. As of June 1998,
    all remaining customers – those of less than 100 kW – had choice,
    and 16 percent had switched by April 1999 (see Figure 2).

 

Figure 2: Customers who have switched suppliers

If we zoom in on transmission, the benefits for end-use customers are
equally compelling:

  • System availability at around the 99 percent mark is among the best
    in the world, despite growing demand.

  • Transmission controllable costs have been reduced by 50 percent since
    1990.

  • The cost to suppliers of transmission has been reduced 37 percent
    in real terms.

  • Compared with 1993, end-use customers enjoy £350 million in annual
    savings attributable to National Grid’s management of congestion and
    other market “overheads,” for which the company is incentivized by
    the regulator. This is a critical point, and one reason why an RTO
    structure with combined network and system operation is so compelling.
    The integration of these functions, when combined with incentive-based
    regulation, allows the RTO to exploit natural synergies. For example,
    to solve transmission congestion, we can make a sensible economic
    judgement between dispatching more expensive generation, shortening
    a transmission outage, or contracting with a customer to reduce demand.
    All options are at our disposal, for the ultimate benefit of the end-use
    customer.

All these achievements have been driven by what I believe is the overarching
and most important, but perhaps least recognized, benefit of privatized
transmission – management focus. By requiring transmission professionals
in the U.K. to concentrate on what we do best, the government forced us
to focus with incredible acuity on a single business, one that encompassed
transmitting electricity, facilitating competition, and minimizing electricity
costs.

This made us hone our business strategy and our operations, from network
management to accounting to R&D. As a result, we have achieved world-leading
efficiency, and innovative improvements across the company.

What’s Good for the Customer is Good for the Company

Not surprisingly, this success in the operations area has resulted in
financial success as well. It’s a maxim in business that companies that
concentrate on what they know best will do better than those who diversify
into areas they know little or nothing about.

National Grid has delivered dividend growth and strong shareholder returns
consistently in the five years since it was floated on the London Stock
Exchange.

Contributing to this success was management focus again, but manifested
in another way. After a few years spent trying to perfect the art and
science of running a national transmission network, National Grid began
transferring its transmission skills to other parts of the world, and
its core competencies of building and maintaining large networks to a
new industry, namely telecommunications.

Today, National Grid runs transmission systems in four countries and
has telecommunications ventures in six, providing shareholders with an
appealing combination of cash-generating electricity investments and value-enhancing
telecommunications investments.

It is unarguable that this “twin-strand” strategy, which is building
considerable value for shareholders, would not have been possible under
government or non-profit ownership.

Now we hope to share some of our privatized transmission expertise in
New England where we acquired New England Electric System and Eastern
Utilities Associates earlier this year. National Grid professionals who
have lived through U.K. deregulation are working with U.S. management
to build their knowledge of operating transmission in a competitive environment,
and deliver additional value to customers and shareholders.

Conclusion: Looking Ahead

Deregulation is an evolutionary process, not a quick fix. The U.K. market
continues to develop and to be refined. Currently, National Grid is working
with the U.K. regulator and market players in designing changes to the
wholesale trading market, to increase the effectiveness of the market
and drive prices even further down. Our role will be central to the success
of these changes.

But there is no doubt that the primary goals of restructuring – security
of supply and lowest possible cost – have been largely achieved in the
U.K., with transmission reliability at record levels and transmission
costs cut in half.

While there is no one restructuring model suitable for all regions of
the world, there is no question that on a purely economic basis, unbundled,
independent, privately-run, for-profit transmission is a model that can
create value for all stakeholders.