Examining Business Process Outsourcing

What does next-level utility performance look like? The energy industry may
be one of the most complex business environments, as it is capital-intensive,
regulatory-constrained, ROI-managed, environmentally challenged, often unionized
and accountable for near-perfect reliability.

Utilities that achieve “next level” performance in such a challenging environment
are skilled at balancing asset portfolios with regulatory constraints. They
understand that all assets must contribute to business performance and actively
seek effective partners and strategies.

Today, most utilities recognize the value of business process outsourcing (BPO)
to decrease customer service costs. However, some are beginning to consider
BPO to unlock the value of underperforming assets, assist in M&A integration,
enhance revenue, achieve cost certainty in rate recovery and act as a safety
net for skills lost to retirement.

With this in mind, we explore 10 questions that executives should consider
regarding business process outsourcing. As a number of North American utilities
have proven, a comprehensive approach can yield substantial performance results.

1. Core Processes vs. Noncore Processes: Which Are the Best Candidates for
Outsourcing?

When utilities evaluate business process outsourcing, the question often arises
– “Which processes are most advantageous to outsource – core or noncore?” Perhaps
the more critical questions utilities should ask are:

  • Are our key processes performing well?
  • Are they cost-efficient and effective?
  • Do they enhance or inhibit corporate performance and customer satisfaction?

If the answer to these questions is “no,” then perhaps it is process performance
– not ownership of the processes – that should be considered critical, and therefore
core, to the utility, making business process outsourcing well worth considering.

2. Should We Outsource Entire Processes or Leverage Stand-Alone Services?

The more a utility participates in end-to-end process outsourcing, the greater
the ability to increase performance, capture synergies and mitigate risk across
functional areas, revenue processes and technology investments. Fortunately,
the transition can be a stepped approach rather than an “all or nothing” decision.

Outsourcing single services allows the utility to incrementally reduce costs
and gain outsourcing experience – scaling the scope only after achieving tangible
value and quantified success. However, the transactional nature of the “start
small” outsourcing model provides few opportunities to add strategic value across
operational functions. In contrast, the greatest advantages in outsourcing entire
processes come from process synergies which result from control of related upstream
and downstream functions.

Without the constraint of operational silos, the outsourcer can redesign ineffective
or outmoded business processes that are inhibiting performance. Legacy processes
become standardized, often reducing or eliminating redundant functions and data
inaccuracy. Labor-intensive manual processes are often automated, allowing personnel
to be reallocated to higher-value tasks. This “hands on the wheel” control
enables the utility to gain synergistic improvement in process execution across
the entire organization.

3. Beyond Cost Savings, What Performance Advantages Can We Gain From BPO?

Most executives understand the cost-saving advantages that business process
outsourcing can provide. Others recognize BPO for its ability to transfer IT
risk. Beyond cost savings, there are five distinct advantages that business
process outsourcing provides utilities:

Stranded Assets – Unlock the Value: For utilities that rely on
legacy platforms or those that possess CIS assets that have not performed as
expected, transferring outdated and underperforming systems to a proven outsourcer
provides both balance sheet and capital investment relief – without Wall Street
scrutiny or the need for rate recovery.

Revenue Enhancement: In recent years, gain-sharing has emerged as an
important means to capture value. Structured properly, gain-sharing agreements
allow the parties to consider opportunities that may be too risky to pursue
individually. Good candidates for gain-sharing include bad debt, call volume,
IVR utilization and field crew efficiency.
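
To make the mechanics concrete, consider a purely hypothetical illustration: if a
utility's baseline bad-debt write-off is $20 million a year and the outsourcer's
collection improvements reduce it to $16 million, a 50/50 gain-share would give each
party $2 million of the $4 million gain, a return neither would likely have pursued
alone given the investment and risk involved.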

Risk Mitigation: Most utilities have interfaced dozens of disparate
applications to legacy and secondary systems. This complexity leaves them vulnerable
to escalating maintenance costs, upgrade constraints and technology currency
risk. In contrast, business process outsourcing allows the utility to transfer
the operational and financial risk of large-scale IT initiatives to the provider.

IT Business Partner: Outsourcers use continuous improvement methodologies
to identify improvement opportunities, measure performance against goals and
forecasts and leverage best practices to standardize process performance. Typical
service-level agreements reward or penalize BPO providers for contract performance,
while scorecards assure the utility that service levels are achieved or exceeded.

Change Agent: As a change agent, the outsourcer ensures that the right
people, processes and technologies are aligned to the utility vision. This change
in roles, responsibilities and practices often increases service accountability.

More than a cost-cutting tool, business process outsourcing accelerates utility
performance through financial engineering, risk transfer, continuous improvement,
cultural accountability and revenue enhancement such as gain-sharing mechanisms.

4. Workforce Retirement, Unions and BPO: Is There a Middle Ground for Our Employees?

Leading providers have deep experience in utility processes, practices and
platforms. They can often fill application integration and project management
gaps, manage revenue cycle (meter-to-cash) process requirements and provide
scale for fluctuating call center and IT demand. In a time of dwindling resources,
this support allows utility employees to focus on higher-value activities.

For unions concerned with employee displacement, many business process outsourcers
offer the ability to re-badge utility personnel as their employees. As members
of the outsourcing team, many of these utility employees gain exposure to new
technologies and process improvement skills that may advance their career options.

5. Can Business Process Outsourcing Benefit Rate Cases?

In regulatory environments where prudent expenditures are of concern, business
process outsourcing may enhance the success of rate-case allowances by demonstrating
to regulators that escalating service costs are better handled by providers
whose efficient operations directly benefit rate payers. Some commissions are
evaluating business process outsourcing for its ability to protect rate-payer
interests.

Yet, considering business process outsourcing as a rate-case contributor is
a paradigm shift for most utilities. Historically, rate cases have focused on
capital improvement and cost recovery – rather than demonstration of operational
savings. Viewed through an alternate lens, however, rate cases provide an opportunity
to align rates more closely with a utility’s current market conditions and with
changing customer expectations.

For those utilities that are considering business process outsourcing as good
stewardship of rate-payer assets, we offer the following suggestions for engaging
regulators in going beyond “standard service” definitions:

  • Be prepared with solid data on current service costs rather than relying
    on historical and inaccurate data;
  • Engage analysts and leading outsourcers to provide data on service benchmarks,
    cost models and the improvements customers may be willing to pay for; and
  • Seek agreement from regulators on outsourcing goals, then communicate your
    outsourcing strategy and business case with them before the rate-case horizon.

6. Does Increased Performance Demand Offshore Outsourcing?

With all the publicity, it is easy for executives to assume that outsourcing
means offshoring.

Onshore outsourcing is often the best option for utilities, as it combines
increased business performance with local control, community involvement and
the matching of cultural demographics. Near-shore outsourcing combines the benefits
of geographical proximity, time zone convenience and bilingual capabilities
with cost savings. Utility executives have typically shown little enthusiasm
for offshore outsourcing – particularly for front-office activities like customer
care. However, overseas providers are increasingly being evaluated for large-scale
programming or commodity processing.

7. What Should We Look for in a Next-Level Outsourcing Partner?

To determine the qualities that are effective in an outsourcing partner, it
is important to define your outsourcing strategy and the results you expect.
Leading outsourcers must demonstrate robust business continuity practices, mature
disaster recovery strategies and processes for contract disentanglement.

Financial Strength and Stability: Successful business process outsourcing
is capital-intensive. The provider should demonstrate solid financial performance
and stability over time, a history of regular technology investment and the
ability to acquire needed resources.

Broad Technology Experience: BPO providers should demonstrate a track
record of enhancing legacy systems, solving implementation and migration challenges,
streamlining and automating business processes and providing strategy for future
IT needs.

Utility Experience/Cultural Aptitude: In regulated markets, a business
process outsourcer should be highly familiar with regulatory requirements, jurisdictional
rules, rate recovery issues, customer care processes, union concerns and shareholder
expectations. Conversely, for those utilities operating in retail markets, the
outsourcer should demonstrate experience with market transactional processes,
provider-of-last-resort requirements, retail billing practices and oversight
agencies.

8. What Is the Most Effective Outsourcing Governance Structure?

Creating the appropriate governance structure is as essential as choosing the
right service provider. Be prepared to give your agreement the importance it
deserves by considering the following 10 criteria.

  1. Governance Strategy: Build an agreement that is tailored to your
    strategic goals.
  2. Collaborative Management: Define the expected savings, process improvements,
    performance outcomes and “actionable” changes you hope to capture.
  3. Roles and Responsibilities: Define the key roles, responsibilities,
    business processes and intersections between each organization.
  4. Compliance Reporting: Include requirements for examining the provider’s
    process compliance.
  5. Tracking Mechanisms: Identify every measurement that is important
    to success and the methods and tools that will be employed.
  6. Issue Resolution: Create a formal resolution process that defines
    when an issue should be escalated and who should be responsible for resolving
    the issue.
  7. Communications Plan: Build a communications plan that includes daily
    measures, weekly reports and regular presentations to an executive steering
    committee.
  8. Service-Level Agreements: Define key performance indicators (KPIs)
    for evaluating SLA performance, the frequency of measurement and whether performance
    rewards or penalties will be enacted (a brief scoring sketch follows this list).
  9. Change Control: Describe change control procedures, when a change
    should be recommended, how it will get approved and how it will be deployed
    through the organization.
  10. Business Continuity/Disentanglement: Establish a continuity plan
    that defines what will occur if the provider is in default or unable to meet
    service-level agreements.
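
To illustrate item 8, here is a minimal sketch, in Python, of how monthly SLA
performance might be scored against agreed KPIs; the KPI names, targets and dollar
rates are hypothetical and not drawn from any actual agreement.

```python
# Hypothetical monthly SLA scoring: each KPI has an agreed target and a
# reward/penalty rate per unit of outperformance or shortfall.
SLA_KPIS = {
    # kpi: (target, higher_is_better, dollars per unit of deviation)
    "first_call_resolution_pct": (85.0, True, 5_000),
    "avg_speed_of_answer_sec":   (30.0, False, 2_000),
    "billing_accuracy_pct":      (99.5, True, 10_000),
}

def score_month(actuals: dict) -> dict:
    """Return each KPI's result and the net reward (+) or penalty (-) in dollars."""
    results = {}
    for kpi, (target, higher_is_better, rate) in SLA_KPIS.items():
        actual = actuals[kpi]
        deviation = (actual - target) if higher_is_better else (target - actual)
        results[kpi] = {"actual": actual, "target": target,
                        "adjustment": round(deviation * rate, 2)}
    results["net_adjustment"] = sum(v["adjustment"] for v in results.values())
    return results

if __name__ == "__main__":
    print(score_month({"first_call_resolution_pct": 87.0,
                       "avg_speed_of_answer_sec": 28.0,
                       "billing_accuracy_pct": 99.2}))
```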

9. How Should We Conduct an Outsourcing Evaluation?

A thorough business process sourcing evaluation is a complex, time-consuming
undertaking that demands a collaborative approach. There are many variations
to sourcing studies, but most evaluation approaches contain these basic elements:
planning, discovery and design. Each phase encompasses several steps. How these
are bundled or ordered is less important than simply ensuring that they are
completed successfully.

Phase 1: Planning
The planning phase must be conducted thoroughly
because it determines the nature of the evaluation, its scope, duration and
effectiveness. Essential questions should be addressed, such as what services
should be considered, what business objectives must be achieved and what financial
targets create an attractive alternative. This phase typically requires one
to two months.

Phase 2: Discovery
The goal of the discovery phase is to determine
whether outsourcing makes sense for your organization. Two distinct work streams
must occur in this phase. The first is focused externally on gathering information
from potential partners via RFIs or RFPs in conjunction with interviews, site
visits and reference checks. The second work stream is focused internally on
verifying internal costs and service levels – both as they occur today and as
they are expected to be in the future. By the end of the discovery phase, multiple
providers may be identified as good candidates to move on to the design phase.
A thorough discovery phase typically lasts two to four months.

Phase 3: Design
The design phase encompasses the substantial activities
needed to tailor the provider’s solution to your needs. By this point, the buyer
and potential providers should have moved beyond the guarded exchange of information
to open conversations that expose the true opportunities to create value and
share risk. The goal of this phase is to reach executable agreements with one
or more potential providers. The time frame for completing design is two to
four months.

10. What if Our Organization Isn’t Ready for Full Outsourcing? Are There Other
Options We Can Leverage?

Business process outsourcers offer utilities multiple options for increased
system performance without committing to full outsourcing.

Outsourcer as Consultant: Outsourcers often inherit complex legacy systems
and process problems that require innovative approaches to resolve. Hence they
have a heritage of improving the utility assets they assume. When utilized in
a consulting capacity, outsourcers often bring a more pragmatic approach to
resolving real-time problems than strategy firms.

Managed Services: Utilities can engage an experienced outsourcer through
a managed service agreement. In this scenario, the IT assets and the utility’s
knowledge of legacy operating environments remain with the utility while the
outsourcer fills internal resource gaps on an as-needed, cost-effective basis.

Service Performance: Alternatively, a utility may choose to use the outsourcer
for targeted, short-term business services. For instance, the utility may opt
to use the outsourcer’s expertise in operating call centers to lower costs and
improve customer service quality. Call centers are excellent candidates for
this type of approach, as outsourcers often employ automation and process-improvement
methodologies that bring costs and quality in line with utility objectives.

For More Information
This article is an excerpt of “Business
Process Outsourcing: 10 Questions Utility Executives Should Consider.” To obtain
a complimentary copy of this 25-page white paper, please contact Kim Kanady
via email at utilityservices@alldata.net.

 

Using Service-Oriented Architecture to Transform Utilities

Since the 1970s, utility companies have grown principally through mergers and
acquisitions. This industry consolidation has resulted in organizations with
diverse business processes (e.g., customer care, outage management, asset management,
financial management) reflecting the legacy operating company practices. Associated
with this business process “portfolio” is a software “portfolio,” including
homegrown and purchased applications (e.g., customer information systems, general
ledger systems, work management systems). This business and technology redundancy,
with long investment recovery timelines, has resulted in significantly higher
operating and maintenance costs.

Over time, utility companies have found that these sets of duplicative business
process and technology portfolios result in operating inefficiencies and are
difficult to adapt to new business requirements (e.g., operational excellence
programs, new customer programs, leveraging new technologies) as they emerge.
These requirements are driving companies to transform their business processes
and the associated technology foundation.

Transformation Is the Key to Improving

As utilities have refocused on their core businesses, they have embarked on
business process transformation initiatives to realize capital and operating
benefits. These transformation initiatives typically include rationalizing similar
processes across the company – especially those that span different operating
areas. For example, many utilities continued to sustain multiple independent
customer call centers – with separate policies, procedures and technology –
for each of the “legacy” operating companies. A common business transformation
initiative is to consolidate the call-handling processes and then to combine
these call centers to allow the same resources to support multiple operating
companies. This rationalization process enables the company to better balance
its workload across the call centers.

Some companies adopt an approach that is targeted to a specific business process,
such as customer service, as described above. This focus allows companies to
establish an initial process, or methodology, to rationalize business processes
and the associated support systems. These companies then move into another area
to refine the methodology and continue the transformation, evolving to a
Service-Oriented Architecture (SOA) platform. In many cases, this focused approach is
coupled with a required technology upgrade or regulatory mandate.

Other companies adopt a broader, enterprisewide approach that includes multiple
business processes as part of a higher-impact corporate transformation to improve
operating effectiveness, customer service or shareholder returns. As a result,
these companies embark on an integrated program and methodology to transform
their businesses, supported with a proactively designed and built SOA environment.
These companies understand that while these methods and environments will serve
as the initial foundation, this foundation will continue to evolve as the company
moves ahead with its transformation activities.

A key element to enable this business process transformation is technology.
Given a diverse application software portfolio, a new technology platform based
on SOA offers key benefits:

  • Leverages existing technology investments. SOA is a system design
    technique that does not require wholesale replacement of business systems.
    Rather, SOA focuses systems on individual capabilities or services. Business
    and technical architects are then able to expose these services from existing
    systems in a manner that supports SOA designs. As a result, companies are
    able to continue leveraging existing technology investments such as enterprise
    resource planning, customer information, asset management and work management
    applications.
  • Offers business process and technical flexibility. SOA also supports
    more flexible business and technical designs. This flexibility is driven by
    the consistent business processes resulting from transformation modeling activities.
    These processes are then encoded in SOA services. As a result, corporate restructuring
    and partnerships (e.g., adding a generation facility, merging with another
    company, integrating with a new company) become easier through the use of
    these consistent services because one side of the business transaction is
    already well-defined. The key to enabling this flexibility is the collaboration
    between business and technology teams. This collaboration will help ensure
    that the technical solution is positioned to meet not just current, but also
    future, needs.

Before starting a transformation process, a company should have a sound understanding
of its business operations and functions. Most utilities have not documented the
myriad business processes in use across the enterprise. However, multiple utility
industry business models exist, including the International Electrotechnical
Commission’s (IEC’s) Common Information Model (CIM) specification[1] and IBM’s
Component Business Model (CBM) for Energy and Utilities (see Figure 1).

The CIM was developed with the participation of multiple utility companies,
technology vendors and standard bodies. In recent years, it has received increasing
interest from utilities seeking to rationalize their technology portfolios and
improve their independence from technology vendors.

IBM’s CBM was developed as part of a structured process that focuses on high-value
transformation efforts that are enabled through SOA. A key distinction between
CIM and CBM is that CIM models utility industry business processes, while CBM
is integrated into an overall methodology that yields a transformation road
map to deliver business value. However, both models portray typical utility
industry processes and should be considered as a starting point for developing
a utility company’s individual SOA design. For example, a company that is targeting
improved asset optimization might focus on the distribution network asset optimization
and asset design activities in the IBM CBM for utilities.

Key Elements of SOA

SOA is a technical concept and can be represented as shown in the reference
architecture (see Figure 2).

The business services shown in Figure 2 are intended to provide an initial
context for the business service classification. They include interaction services,
process services, information services, access services, partner services, business
application services, business innovation and optimization services, IT service
management services and development services.

Interaction services provide the capabilities required to deliver IT
functions and data to end users, meeting the end users’ specific usage preferences.
This includes traditional channels, like Web browsers and portals, as well as
pervasive devices such as mobile phones and PDAs.

Process services provide the control services required to manage the
flow and interactions of multiple services in ways that implement business processes.

Information services provide the capabilities required to federate,
replicate and transform data sources that may be implemented in a variety of
ways. Many of the services in an SOA are provided through existing applications;
others are provided in newly implemented components; and others are provided
through external connections to third-party systems.

Access services enable access to existing enterprise applications and
enterprise data from the Enterprise Services Bus (ESB). A set of access services
provide the bridging capabilities between legacy applications, prepackaged applications,
enterprise data stores and the ESB. These access services expose the data and
functions of the existing enterprise applications in a consistent way, allowing
them to be fully reused and incorporated into functional flows that represent
business processes. Existing enterprise applications and data leverage the business
application and data services of their operating environments, such as CICS,
IMS, DB2, etc. As these applications and data implementations evolve to become
more flexible participants in business processes, enhanced capabilities of their
underlying operating environments – for example, support of emerging standards
– can be fully utilized.
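
To make the role of access services concrete, the following is a minimal sketch,
assuming a hypothetical legacy customer information system wrapped by a simple
adapter; the class names and record layout are illustrative only and not part of
any product mentioned above.

```python
# A toy "legacy CIS" with its own record layout, standing in for an existing
# enterprise application (for example, one running against CICS/IMS/DB2 data).
class LegacyCIS:
    _ROWS = {"00012345": ("SMITH,JOHN", 182.40, "PAST_DUE")}

    def fetch_raw(self, acct: str):
        return self._ROWS.get(acct)

# Access service: exposes the legacy function in a consistent, reusable shape
# so it can be placed on the ESB and composed into business process flows.
class CustomerAccountAccessService:
    def __init__(self, legacy: LegacyCIS):
        self._legacy = legacy

    def get_account_summary(self, account_id: str) -> dict:
        row = self._legacy.fetch_raw(account_id)
        if row is None:
            return {"accountId": account_id, "found": False}
        name, balance, status = row
        return {"accountId": account_id, "found": True,
                "customerName": name.title().replace(",", ", "),
                "balanceDue": balance, "collectionStatus": status}

if __name__ == "__main__":
    svc = CustomerAccountAccessService(LegacyCIS())
    print(svc.get_account_summary("00012345"))
```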

Partner services provide the document, protocol and partner management
capabilities required for business processes that involve interactions with
outside partners and suppliers. This includes, for example, the handling of
industry-specific document and message formats.

Business application services provide runtime services required for
new application components to be included in the integrated system. These application
components provide new business logic required to adapt existing business processes
to meet changing competitive and customer demands of the enterprise. Design
and implementation of new business logic components for integration enables
them to be fully reusable, allowing them to participate in new and updated business
processes over time. The business application services include functions important
to the traditional programmer for building maintainable, flexible and reusable
business logic components.

Business innovation and optimization services incorporate monitoring
capabilities that aggregate operational and process metrics to efficiently manage
systems and processes. Managing these systems requires a set of capabilities
that span the needs of IT operations professionals and business analysts who
manage the business operations of the enterprise.

These capabilities are delivered through a set of comprehensive services that
collect and present both IT and process-level data, allowing business dashboards,
administrative dashboards and other IT-level displays to be used to manage system
resources and business processes. Through these displays and services, it is
possible for the business and IT personnel to collaborate to determine, for
example, what business process paths may not be performing at optimum efficiency,
the impact of system problems on specific processes or the relationship of system
performance to business process performance. This collaboration allows IT personnel
and assets to be tied more directly to the business success of the enterprise
than they traditionally have been.

IT service management services include capabilities that relate to scale
and performance; for example, edge services, clustering services and virtualization
capabilities that allow efficient use of computing resources based on load patterns.
The ability to leverage grids and grid computing is also included in this category.

While many of the IT service management services perform functions tied directly
to hardware or system implementations, others provide functions that interact
directly with integration services provided in other elements of the SOA through
the ESB. These interactions typically involve services related to security,
directory and IT operational systems management. The security and directory
services include functions involving the authentication and authorizations required
to implement, for example, single sign-on capabilities across a distributed and
heterogeneous system. Monitoring supports the management of service-level agreements
and of the overall health of the system, alongside the security, directory, IT
system management and virtualization functions.

Development services are an essential component of any comprehensive
integration architecture. The SOA includes development tools, which are used
to implement custom artifacts that leverage the infrastructure capabilities,
and business performance management tools, which are used to monitor and manage
the runtime implementations at both the IT and business process levels. Development
tools allow people to efficiently complete specific tasks and create specific
output based on their skills, their expertise and their role within the enterprise.
Business analysts who analyze business process requirements need modeling tools
that allow business processes to be charted and simulated. Software architects
need tool perspectives that allow them to model such elements as data, functional
flows and system interactions. Integration specialists require capabilities
that allow them to configure specific interconnections in the integration solution.
Programmers need tools that allow them to develop new business logic with little
concern for the underlying platform. Yet while it is important for each person
to have a specific set of tool functions based on his or her role in the enterprise,
the tooling environment must provide a framework that promotes joint development,
asset management and deep collaboration among all these people. A common repository
and functions common across all the developer perspectives (e.g., version control
functions, project management functions, etc.) are provided in the SOA reference
architecture through a unified development platform.

The ESB serves as the communications channel for sharing services among the
different systems. A challenge for technical teams is to clearly define the
services to be exposed over the ESB. If too many services are exposed, the design
becomes overly complex and ineffective because of the sheer number of services
available. If too few are exposed, the design ends up with “compound” services
that are difficult to reuse because too many functions are bundled together.
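
The granularity trade-off can be illustrated with a small sketch; the registry and
service names below are hypothetical and only contrast focused services with an
overly "compound" one.

```python
# Hypothetical service registry on an ESB. A focused service does one thing and
# is easy to reuse; a "compound" service bundles several functions and forces
# every consumer to accept all of them.
def get_meter_reading(meter_id: str) -> dict:
    return {"meterId": meter_id, "kwh": 1250}

def get_outage_status(circuit_id: str) -> dict:
    return {"circuitId": circuit_id, "customersOut": 0}

def compound_account_everything(account_id: str) -> dict:
    # Hard to reuse: callers wanting only the meter reading still depend on
    # billing, outage and credit lookups bundled together.
    return {"meter": get_meter_reading("M-" + account_id),
            "outage": get_outage_status("C-" + account_id),
            "billing": {"balance": 0.0}, "credit": {"rating": "A"}}

ESB_REGISTRY = {
    "GetMeterReading": get_meter_reading,                  # focused, reusable
    "GetOutageStatus": get_outage_status,                  # focused, reusable
    "GetAccountEverything": compound_account_everything,   # too coarse
}

if __name__ == "__main__":
    print(ESB_REGISTRY["GetMeterReading"]("M-001"))
```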

By using business process models such as the CIM or CBM and the SOA reference
architecture model, companies can develop a reusable services portfolio to serve
as the foundation for the SOA design. These services can be used to integrate
existing technology investments and ultimately provide a flexible technology
environment that can be adapted – more efficiently and effectively – to meet
new business requirements. A partial list of services for a typical utility
company based on CBM, and compatible with the CIM, is presented by business
area in Figure 3.

Conclusion

As we have seen, utility industry consolidation has resulted in diverse business
process and technology portfolios within the same corporate enterprise. This
duplicative business environment leads to inefficiencies
that may affect customer service and reliability, limit shareholder returns
and limit business flexibility. Business transformation is the only viable approach
to resolving these inefficiencies.

Business transformation starts with corporate recognition of the need for change.
This change, when implemented with an SOA design, provides a flexible platform
that continues to leverage existing investments to help improve service delivery,
business flexibility and, ultimately, shareholder returns.

 

Business Process Management in Energy

Forced to deliver expected shareholder returns in stagnant markets, energy
utilities are turning their attention to performance improvement. The drive
for operational excellence is forcing energy companies to optimize complex processes
spanning several lines of business. Business process management, with its compelling
value proposition (to automate and optimize business processes across the enterprise),
holds promise to significantly affect corporate effectiveness. However, early
steps can be burdened with organizational and technological challenges.

BPM for Operational Excellence

Pressure to achieve earnings growth rates considerably larger than the sluggish
native customer/load growth rates is forcing energy companies to explore performance
improvement as their growth engine. After an initial focus on the low-hanging
fruit of cost reduction, energy companies are now turning to innovation through
business process automation/optimization. Consequently, business process management
(BPM), with its promise to orchestrate and optimize complex utility business
processes (end-to-end), based on relevant information obtained through real-time
analytics, is gaining more attention as a means to increase corporate agility.

Although optimization of complex business processes spanning numerous lines
of business (LOBs) offers the largest improvement opportunities, it also poses
the greatest technical and organizational challenge. Because fragments of an
end-to-end business process tend to be enabled with compartmentalized LOB applications
tied together with custom, static and inflexible interfaces, such an environment
does not enable the adaptive process modification needed to quickly respond
to environmental changes. In addition, such processes lack clear ownership across
the enterprise, making end-to-end process automation an organizational/labor
issue. Energy companies, in their quest for process improvement (e.g., Six Sigma,
Lean, TQM) to achieve higher adaptability, efficiency and effectiveness, must
adopt and nurture process orientation as a strategy instituted from the CXO
level. To create a technology platform for performance improvement and facilitate
energy enterprise sense-and-respond behavior, energy IT organizations (ITOs)
must embrace service orientation as an enabling architecture. Using service-oriented
architecture (SOA), compartmentalized applications (or their modules) that support
components of the fragmented end-to-end business processes (wrapped in the Web
services envelope) can be exposed to BPM tools for optimal process choreography.

Leading energy companies are starting to embrace process orientation, while
embarking on process-improvement exercises, to increase organizational efficiency
and effectiveness. This initial adoption phase is frequently riddled with unclear
ownership of BPM initiatives (e.g., is it an LOB or ITO endeavor?). Myriad software
providers using the BPM moniker are adding to the confusion, resulting in compartmentalized
projects driven by localized business needs and owned by a particular LOB. In
most cases information technology organizations are on the sidelines trying
to control proliferation of the niche BPM tools and vendors.

As the emerging BPM market starts to consolidate, the process model will become
widely accepted; standards-based engines will replace proprietary ones; bigger,
safer players (e.g., IBM, Microsoft, SAP, BEA) will fully develop their BPM
platforms; and BPM initiatives in energy will be raised to the enterprise level.
This change will help clarify the role of ITOs (and CIOs) as custodians of an
IT infrastructure that enables corporate transformation into a more adaptive and
agile enterprise, with BPM capability as a key requirement for energy companies
focused on performance improvement and innovation.

The BPM Dichotomy

BPM’s dual nature (business and technology) creates a dichotomy responsible
for unclear ownership of its initiatives as well as two types of roadblocks
to successful implementation.

In vertical industries such as energy, numerous examples exist of specific
end-to-end business processes that are not automated with standard enterprise
systems (e.g., ERP). The simplest means of creating value in this circumstance
is to automate the manual portions of business processes
and their interfaces to enterprise applications and systems. LOBs view BPM tools
as an end-user technology that should be owned by the business to facilitate
process automation and replacement of manual processes, such as exception handling,
which are not tracked through formal automation systems embodied in the enterprise
application. Although this approach can provide significant cost reduction and
reduce errors inherent in those tasks, the scope of this automation is not the
complete process; it is only a portion of it reflected in these manual steps.

On the other hand, energy ITOs are seeking new methods for developing software
and system automation. The convergence of Web services (promising universal
connectivity) and model-driven development and architectures (promising technology-neutral
system development) paints a compelling vision of the future of system development.
This future will be process-oriented, with systems created by using a process
model to direct the interaction of various systems and human actors. The systems
will be accessible because their functions are exposed as services, and the
process engine will be sophisticated enough to capture all the semantics of
the business process at various levels. In this vision, energy ITOs are seeing
BPM as an orchestration engine/platform deployed and owned by the ITO, which
will enable transformation into Web services and SOA.

The dichotomy of BPM is also reflected in the types of organizational and technology
roadblocks energy companies face. The main business challenges for successful
BPM deployment are ownership and stewardship of business processes. Unless an
energy company has taken an aggressive, process-oriented view toward its business,
there is often confusion about process definition and ownership. Because the
processes where automation can provide the most value often span functional
areas, there are usually no individuals with responsibility for the overall
process. Instead, there are functional managers, each with individual responsibility
for the subprocess performed by their areas. Effectively automating these processes
requires the creation of new channels of communication as well as new decision
processes to enable the organizations involved to reach agreement about how
to handle the processes.

The technical challenge comes primarily from the fact that BPM is an emerging
technology based on Web services, which are still maturing. Although substantial
progress is being made regarding the standards for and interoperability of Web
services, the practical use of Web services within energy companies and vendors
provisioning niche applications is in its early stage. The universal connectivity
that is required to link a process execution engine with the various actors
and systems in the environment requires a substantial investment in integration
technologies, and minimal integration among various components in the infrastructure
demands a substantial investment in the software platform. It also requires
legacy applications and vendor-delivered, commercial off-the-shelf software to
be retrofitted (by componentizing and wrapping modules in Web services envelopes)
to be able to operate in an SOA.

BPM and Composite Applications

The breakdown of U.S. energy market restructuring (epitomized by Enron’s “creative
accounting” on the wholesale side and California’s deregulation debacle on the
retail side) has forced North American energy companies to curtail “risky energy
ventures” and get back to basics. Correspondingly, this has created a trend
toward rebundling the retail and network distribution businesses into entities
covering both the regulated retail and distribution segments, with operational
excellence as the key value discipline.

Our research indicates that current commercial, off-the-shelf applications in
energy, developed to support the need of unbundled retail/network companies,
cannot adequately support an operational excellence pursuit. To achieve operational
efficiency after harvesting the low-hanging fruit of cost reduction and process
automation, integrated distribution companies need applications that can be
orchestrated beyond automated workflow into cross-departmental process optimization.
Ideally, the goal of complex process optimization should be achieved by leveraging
existing applications rather than by creating the next generation of monolithic
ERP-like environments (which energy companies, driven by the sector’s low credit
rating and access to capital problems, are not likely to pursue). Users aiming
to achieve operational excellence through cross-departmental process optimization
must explore BPM technologies as vehicles that will transform current application
portfolios into a service-oriented architecture.

Energy companies, enabled by recent developments and projected trends in enterprise
application integration, Web services, business process management, business
performance management, business intelligence and next-generation analytics
architecture, which supports real-time analysis of key performance indicators
(KPIs), will begin the transformation into real-time enterprises. During that
time, business process management will play the key role among technologies enabling
energy companies’ sense-and-respond transformation. In this phase, leading energy
application vendors will facilitate BPM use to orchestrate and optimize complex
cross-departmental processes by “beefing up” analytical architectures and exposing
KPIs for process optimization (e.g., “real-time” SAIDI and SAIFI determined
by the number of customers currently affected as an input into crew scheduling
during outage restoration). Following the emergence of mature retail restructuring
models and the maturation of Web services technology, we expect leading vendors
to disaggregate monolithic applications (such as customer information systems)
into services and rebundle them into composite application environments using
a combination of native functions and external services (e.g., Web services
across enterprise and business partner environments). The BPM capability within
the composite architecture environment will be one of the key requirements for
energy companies focusing on operational excellence.
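
As a concrete illustration of the “real-time” reliability KPIs mentioned above, the
sketch below computes snapshot SAIFI and SAIDI from a set of currently active outages
using the standard definitions (customers interrupted, and customer-minutes of
interruption, each divided by total customers served); the outage figures are invented.

```python
# SAIFI = total customers interrupted / total customers served
# SAIDI = total customer-minutes of interruption / total customers served
TOTAL_CUSTOMERS_SERVED = 450_000

# Hypothetical snapshot of active outages: (customers affected, minutes so far)
active_outages = [(1_200, 95), (300, 40), (4_500, 20)]

def realtime_saifi(outages) -> float:
    return sum(cust for cust, _ in outages) / TOTAL_CUSTOMERS_SERVED

def realtime_saidi_minutes(outages) -> float:
    return sum(cust * mins for cust, mins in outages) / TOTAL_CUSTOMERS_SERVED

if __name__ == "__main__":
    print(f"SAIFI so far: {realtime_saifi(active_outages):.4f} interruptions/customer")
    print(f"SAIDI so far: {realtime_saidi_minutes(active_outages):.2f} minutes/customer")
```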

BPM Components

The following are the BPM functional components energy companies need to consider
(a minimal orchestration sketch follows the list):

  • Process modeling: This component provides a graphical tool for modeling
    energy business processes in the as-is and to-be states. Models can also be
    tailored to depict best practices for exception handling or be prepackaged
    to reflect energy industry-specific needs. The visual representation (e.g.,
    swim-lane diagrams, UML models) must enable a business user (not a developer)
    to model the process from a business, not a programming, perspective. Different
    tools support various business process description semantics (i.e., proprietary
    approaches versus emerging standards such as BPEL, BPML and BPSS). Process
    modeling is often bundled with a process orchestration engine.
  • Process improvement methodology: Aligning the energy company enterprise
    business strategy with a process improvement program is a critical success
    factor. Many modeling tools have incorporated support for business-oriented
    improvement methodologies (e.g., Six Sigma, lean thinking, business process
    integration and management, CPI, Balanced Scorecards).
  • Process orchestration engine: A process orchestration engine (POE) takes
    runtime instructions from a process model. To date, these engines have been
    fairly proprietary. Many are migrating to support emerging description and
    execution standards (e.g., BPML, BPEL4WS), yet few of these are commercially
    available. In general, POE vendors cover one or more of the automation categories
    (see Figure 2):
  • Business rules engine vendors: Most of the business POE vendors provide
    lightweight business rules engines embedded in their tools. In other words,
    the execution engine uses business rules as input (i.e., its runtime instructions).
    Business rules represent decision choices that a user or a system will make
    based on a set of conditions. The basic technology has evolved so that user
    decision points in work flows are surfaced (often via a graphical model or
    alerts sent to a user).
  • Integration servers: These tools bind the abstracted business process to
    the data, documents, business logic, messages and events needed by the process.
    Adapters connect the integration server under the orchestration engine controller
    to structured data and logic in the underlying applications, unstructured
    data from the content management environment, search terms (vocabularies)
    from a taxonomy library and messages/events from message queue managers. The
    transformation capabilities of these tools provide semantic reconciliation
    across components as required. In addition, these tools provide basic transport
    and routing of information and events through the process.
  • Process monitoring and analysis: The primary function of these tools is
    to enable the analysis of live data as it moves through the process. There
    are two aspects to this real-time activity monitoring and analysis: 1) analysis
    of the process itself for optimal design (completeness and bottlenecks); and
    2) monitoring the operational process’ performance for predefined KPIs and
    notifying users of out-of-tolerance limits. This provides the opportunity
    to define and initiate corrective actions. The tool may also provide an end-user-facing
    dashboard with graphics in a visual display of KPIs for decision makers.
  • Process simulation/optimization: These tools are used to simulate the business
    process through multiple options, discovering bottlenecks and creating alternatives.
    Allowing the previously mentioned analytics to be captured and used as input
    into the simulation creates a more efficient process (via both design and
    runtime feedback). Some products allow a process to be simulated against production
    conditions to provide feedback.
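
To show how several of these components fit together, here is a minimal sketch of a
process model driving a toy orchestration engine, with one embedded business rule and
a simple out-of-tolerance check; the steps, rule and threshold are all hypothetical.

```python
import time

# A process model here is just an ordered list of named steps; the toy "engine"
# walks the model, consults a business rule at a decision point and checks a KPI.
def validate_order(ctx):   ctx["validated"] = True
def credit_check(ctx):     ctx["credit_ok"] = ctx["exposure_usd"] < 50_000  # business rule
def schedule_crew(ctx):    ctx["scheduled"] = True
def notify_customer(ctx):  ctx["notified"] = True

PROCESS_MODEL = [validate_order, credit_check, schedule_crew, notify_customer]
KPI_LIMIT_MS = 200  # hypothetical out-of-tolerance threshold for end-to-end duration

def run_process(ctx: dict) -> dict:
    start = time.perf_counter()
    for step in PROCESS_MODEL:
        step(ctx)
        if step is credit_check and not ctx["credit_ok"]:
            ctx["outcome"] = "routed_to_manual_review"   # exception-handling path
            break
    else:
        ctx["outcome"] = "completed"
    elapsed_ms = (time.perf_counter() - start) * 1000
    ctx["kpi_alert"] = elapsed_ms > KPI_LIMIT_MS          # simple process monitoring
    return ctx

if __name__ == "__main__":
    print(run_process({"exposure_usd": 12_000}))
```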

Conclusion

The drive for operational excellence and improved financial performance will
force energy companies to integrate and optimize business processes across functional
areas while using existing legacy applications. Leveraging Web services, emerging
composite application architectures and efforts to define common semantics across
energy domains, BPM – with its inherent capabilities to integrate, orchestrate
and optimize complex business processes – will start transforming energy companies’
legacy applications into SOA environments. Energy companies must adopt process
orientation and work on establishing clear ownership of complex end-to-end business
process improvement initiatives. Energy IT organizations must establish a lasting
partnership with the business to create a corporate performance improvement
IT platform based on BPM technology.

 

Must-Have Technology

High commodity prices and volatility are driving growth in energy trading as
producers try to maximize their market share and profitability and buyers attempt
to control their energy costs and risks. Financial institutions have stepped
up both physical and financial trading, adding liquidity to the market and assuring
that volumes will stay high. In order to maintain their competitive edge, utilities
and independent power producers will need to invest in more sophisticated information
technology. However, that investment must be effective – reducing IT support
costs, while increasing the volume of deals per trader.

Energy Prices Will Remain High; It Is a Matter of Supply and Demand

In recent years, the world has seen an unprecedented increase in the price of crude
oil, caused primarily by the narrowing of the gap between global supply and
demand. Specifically, surging demand in China coupled with supply disruptions
associated with the war in Iraq, multiple hurricanes in the Gulf of Mexico and
the oil workers’ strike in Venezuela have driven historically high oil prices.
It is expected that the supply/demand gap will continue to be tight into the
near future. The increase in oil prices has brought subsequent increases in the
prices of natural gas and of electricity where it is generated by gas-fired turbines
(see Figure 1).

In North America, a similarly tight supply/demand situation for natural gas
is driving historically high wellhead gas prices. Granted, there has been some
relief from a mild winter, but weather is an uncontrollable factor. Increasing
demand (driven in large part by the power generation business), declining production
from existing gas fields and the sluggish development of liquefied natural gas
(LNG) capacity are contributing to forecasts of increasing prices and volatility
during the next five years.

The Pace of Energy Trading Is Quickening as the Markets Recover

Energy trading is increasing in importance. Trading volumes are up in almost
all of the markets for energy commodities, whether these are for liquid hydrocarbons
(crude oil, natural gas liquids, etc.), natural gas, power, coal or energy derivatives.
Trading exchanges such as Intercontinental Exchange (ICE) and the New York Mercantile
Exchange (NYMEX) have set records for numerous products. Despite the slowdown
in formation of new Regional Transmission Organizations (RTOs), volumes on existing
spot wholesale power markets are up. Brokerages also are reporting increased
activity in over-the-counter (OTC) energy trading.

The Enron Legacy Has Left Its Mark; Regulatory Scrutiny Remains High

Energy markets – especially power and natural gas – face challenges subsequent
to the collapse of Enron. That collapse decreased the liquidity of the energy
markets, as Enron, acting as a market maker, generated significant liquidity, especially
through EnronOnline (EOL). Investigations into shadow or wash trading practices
extended to other energy companies. About the same time, practices of price
reporting by gas traders were called into question, leaving the market with
doubts about the transparency and credibility of market reporting.

The response to corporate excesses has led to increased regulatory scrutiny
in North America.

  • Congress passed the Sarbanes-Oxley Act, requiring corporations to strengthen
    internal controls and audit procedures, as well as report to shareholders
    events likely to impact their financial performance;
  • The Commodity Futures Trading Commission (CFTC) has issued more than $50 million
    in fines to gas traders and companies for wash trades and manipulative price
    reporting to index publications; and
  • The Federal Energy Regulatory Commission (FERC) initiated actions against
    companies involved in the California energy crisis and also issued new regulations
    requiring quarterly wholesale trade reporting. FERC has also issued new sanctions
    for market manipulation.

In an effort at self-policing, the energy industry formed the Committee of
Chief Risk Officers (CCRO). That organization defined best practices in risk
management, including a common methodology for arriving at value-at-risk, in
addition to establishing disclosure standards for the industry.

Financial Institutions Supply Liquidity, But Have the Edge

The year 2005 saw a continuation of active participation by financial institutions
in not only financial but also physical trading. These institutions are acquiring
the physical assets needed to back that trading. According to SNL Energy, financial
buyers made up over 80 percent of the nameplate capacity acquisitions in 2004
and 2005.

Not only do financial institutions have leverage, but they have years of experience
using sophisticated information technology to support their activities. These
companies have advanced technology for credit and currency risk, along with
well-developed straight-through processing (STP). STP is the ability to access
and track deal data through front, middle and back office. The idea is that
a system of record is created with role-based access for those needing the information.
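
A minimal sketch of that idea follows: a single deal record serves as the system of
record, and each office role is given a filtered view of it; the fields and role
definitions are invented for illustration.

```python
# One system of record per deal; each office sees the fields appropriate to its
# role, but all roles read the same underlying record.
DEAL = {
    "deal_id": "D-2006-0412",
    "counterparty": "ExampleCo Energy",   # hypothetical counterparty
    "commodity": "natural_gas",
    "volume_mmbtu": 10_000,
    "price_usd": 7.85,
    "credit_exposure_usd": 78_500,
    "settlement_status": "unsettled",
}

ROLE_VIEWS = {
    "front_office":  ["deal_id", "counterparty", "commodity", "volume_mmbtu", "price_usd"],
    "middle_office": ["deal_id", "counterparty", "credit_exposure_usd"],
    "back_office":   ["deal_id", "counterparty", "volume_mmbtu", "price_usd",
                      "settlement_status"],
}

def view_deal(role: str, deal: dict) -> dict:
    """Return the role-based view of a deal from the system of record."""
    return {field: deal[field] for field in ROLE_VIEWS[role]}

if __name__ == "__main__":
    for role in ROLE_VIEWS:
        print(role, view_deal(role, DEAL))
```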

Granted, financial institutions entered physical trading lacking requisite
technology for specifically assessing energy commodity risk and physical delivery,
such as scheduling, nominations and settlement. However, these institutions
are acquiring that technology quickly either through buying software or acquiring
the trading desk of energy companies outright. Witness the acquisition of Entergy-Koch
by Merrill Lynch and the partnership between Calpine and Bear Stearns to form
CalBear.

The financial health of the utility industry is improving; however, there are
limited opportunities for growth. The traditional utility can look for growth
in only a few places – growing the customer base through mergers and acquisitions
or acquiring more customers through retail competition. Since restructuring
has slowed significantly in North America, the second avenue is not an option.
Trading and arbitrage do provide opportunities for growth. Merchant generation
companies must be engaged in trading just to do business.

At the same time, Enron revealed just how exposed energy companies are in their
dealings with counterparties. Part of the difficulty lies in the inability to
identify deals across commodities. One company may be selling electricity to
another, but buying gas from that same company. Most companies are still not
equipped to understand their total exposure because deal information does not
flow between units responsible for trading each type of commodity. Add to that
the fact that for large deals there are relatively few counterparties, and credit
agencies have high capital adequacy requirements for this industry.

Trading requires information technology. According to one risk manager at an
independent power producer, “My business depends entirely on technology and
access to information.” In the sections that follow, the required capabilities
and the must-have technologies for competitive edge are described. However,
before setting out the case for technology, it is necessary to describe the
business process it supports.

The business process involves constant information flow between front, middle
and back offices. Although exchanges make the market, today emphasis is being
placed on what goes on within the enterprise. Trading and risk management functions
are performed by front, middle and back offices. Each has different needs for
data and analytics:

  • Front office – where all phases of deal execution occur. Front office
    is responsible for new product development, execution and price curve development.
    Front office typically requires near-real-time (within minutes) information
    (commodity prices, weather forecasts, forward price curves, transmission pricing,
    spot market pricing, emissions status, availability of credit and financing,
    etc.) from internal applications, sensors and external data feeds. Analytics
    tools must be easy for traders to use in calculating the best deals.
  • Middle office – where exposures and risk are examined and controlled.
    Middle office is responsible for forecasting, deal validation and risk management.
    Middle office typically builds limits into the traders’ systems, so real-time
    alerting capabilities are not as critical, but data streams from external
    sources are. Visibility to trading activity and available credit are critical.
    Middle office has a need to receive information that will impact financial
    performance within 48 hours. Middle office also requires the calculating capacity
    to process large volumes of data within hours, rather than days.
  • Back office – where settlement and accounting occur. The back office
    is responsible for reconciliation and compliance with accounting and disclosure
    requirements. Back office requires receipt of information into settlement
    and ERP systems on a timely basis, but does not require real-time information.
    However, the more quickly deals can be processed, the faster the next trade
    can be made.

Today, most interaction between the offices (front, middle and back) occurs via
face-to-face contact on the trading floor, phone or fax rather than through
electronic alerting or transaction systems. Traders (front office), risk managers
(middle office) and schedulers (front office) typically are co-located on the
trading floor to facilitate communication. Personnel involved in settlement
(back office) and forecasting (front or middle office) are often housed separately.
As trading volumes increase, trading companies will rely more on portals to
gain role-based access to information and analysis.

In the early years, there was a common understanding about the business process
involved in energy trading and risk management, but this process was largely
undocumented. The CCRO has since documented the process, and we use its outline
as a foundation for describing the process from new product development in the
pre-deal stage to final settlement in the post-deal stage. Supporting these business
processes is a set of applications and application development and deployment
tools (see Figures 2 and 3). We refer to these technologies as Energy Trading
and Risk Management (ETRM).

Must-Have Technologies Include Applications, Data Services and Supporting Technologies

The technology to support the information needs of a profitable business is
certainly mature. At the very least, utility companies need to invest in applications,
especially if they are on older legacy applications that are undocumented, costly
to support and do not provide accountability and traceability. The only decision
now to be made by companies participating in trading is how much they are willing
to spend. For the enterprise, the required capabilities are listed below (a small
value-at-risk sketch follows the list):

  • Enterprise access to data, transactions and analytics. While some
    companies are still focused on putting together market information and trading
    tools on a portal for the front office, the more advanced companies are concentrating
    on building enterprise portals that provide role-based access not only to
    front, middle and back office, but also to operations and corporate. The most
    valuable portals will not just assemble streaming data; they will display graphics,
    use geomapping of price and other data and, most importantly, offer the ability
    to run analytics that provide business intelligence specific to ETRM. For the
    executive level, a portal alone is not enough; dashboards with key performance
    indicators are needed to support business performance management. For compliance
    purposes, traceability – the ability to replicate results and reporting –
    is essential just to do business.
  • Energy and credit risk analytics. Trading energy commodities is not
    like trading financial products. The commodity tends to be lumpy and, for commodities
    like power and natural gas, closely tied to regional markets. In addition, there
    are regulations specific to these commodities. Generic analytics cannot be used
    to establish value – risk management applications specific to energy commodities
    are required (a minimal simulation sketch follows this list). On the other hand,
    the industry can take advantage of the more generic credit risk analytics used
    by the financial services industry for financial instruments.
  • Integration capabilities and service-oriented architecture. ETRM
    depends upon the integration of many applications, especially in oil and gas
    where IT organizations currently run hundreds of applications. Integration
    capabilities are essential for STP. While there are opportunities for consolidation
    of redundant systems, there is not one comprehensive integrated application
    that can handle all functions – niche applications are still required. Currently,
    the norm for integration is enterprise application integration (EAI) technology.
    Along with integration buses come alerting capabilities, critical for companies
    that must be able to confirm deals quickly, so as not to lose opportunities.
  • Web services wrappers will be used in the utility industry to help
    integrate legacy applications, until these can be replaced. ETRM software
    vendors are already developing service-oriented architecture (SOA), enabling
    users to build integrated composite applications. Energy companies will want
    to have an SOA supporting the enterprise architecture as well, especially
    if these companies trade in multiple markets and commodities.
  • High-speed calculating capacity. The middle office needs to be able
    to run simulations, stress testing and optimization routines involving numerous
    variables and large data sets within hours rather than days. Analytics for
    ETRM are fairly complex, involving numerous variables operated on by sophisticated
    algorithms, requiring multiple iterations. Although long-term planning may
    allow waiting a day or two for a process to run, other activities require
    quick calculations performed in minutes rather than days. Particularly complex
    calculations may require at the very least mixed-integer linear programming
    and at best high-performance computing (HPC) capability provided via parallel
    processing.
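To make the analytics requirement concrete, the sketch below shows, purely for
illustration, the kind of Monte Carlo simulation a middle office might run to
estimate value-at-risk on a single forward power position. The position size,
price, volatility and horizon are hypothetical, and real energy risk engines
model regional price behavior, volumetric risk and credit exposure in far more
detail.

```python
import numpy as np

# Minimal sketch: Monte Carlo value-at-risk for a single forward power position.
# All figures (position size, price, volatility, horizon) are hypothetical.
rng = np.random.default_rng(seed=42)

position_mwh = 50_000          # long 50,000 MWh of forward power
forward_price = 62.0           # $/MWh today
daily_vol = 0.04               # assumed 4% daily price volatility (lognormal)
horizon_days = 5
n_scenarios = 100_000

# Simulate horizon price shocks and revalue the position in each scenario.
shocks = rng.normal(0.0, daily_vol * np.sqrt(horizon_days), n_scenarios)
scenario_prices = forward_price * np.exp(shocks)
pnl = position_mwh * (scenario_prices - forward_price)

# 95% value-at-risk: the loss exceeded in only 5% of scenarios.
var_95 = -np.percentile(pnl, 5)
print(f"5-day 95% VaR: ${var_95:,.0f}")
```

Runs of this size are exactly where the calculating-capacity requirement above
comes from: portfolios with thousands of positions and correlated commodity,
credit and weather drivers quickly push such simulations toward parallel
processing.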

Planning is required. As with other projects, investing in technology simply to
maintain a competitive edge is not enough. It is important for the business
leaders in energy trading and risk management to work closely with IT to ensure
that the investment in technology results in increased capacity (deals per trader)
as well as decreased risk. Hiring additional IT personnel to build custom programs
is not a winning strategy in the long run.

 

Making Decisions With Data

How well does a utility perform its distribution operations? The answer to
that question depends in part on what you are trying to achieve. Most utility
stakeholders (customers, regulatory bodies, local governments, staff and owners)
formulate an answer based on some combination of reliability, cost, power quality
and safety.

Let’s review how these factors can be evaluated as they relate to the performance
measuring, monitoring and reporting challenges facing today’s electric distribution
company.

From Data to Decisions

Critical to the evaluation of the success or failure of any business process
is accurate knowledge of how the process performed in the past, how it is performing
now and how it is likely to perform in the future. That knowledge can arise
only from data.

But amassing this data, transforming it into knowledge and communicating it
to stakeholders can be a daunting task. This raw data must be captured and transformed
into meaningful information that can be understood by the stakeholders. The
stakeholders need to then process the information in the context of their existing
knowledge, building a broader knowledge base from which to make decisions about
tactical corrections and strategic modifications.

Once this knowledge has been cultivated, cause-and-effect relationships can
be investigated to determine the drivers behind absolute performance and performance
trends. The electric distribution company is no exception to this proven method
of achieving organizational excellence.

The Complexity of Data Assembly and Analysis

Measuring
In the area of electric distribution operations, key performance indicators
(KPIs) have traditionally taken the form of engineering calculations that provide
an indication of how well the delivery system itself (“the wires”) succeeds
at “keeping the lights on” and how well operations responds when the inevitable
failure does occur (restoration). These measures are commonly referred to as
reliability indices and are intended to measure performance from the electric
customer’s perspective (i.e., customer-based measures).

Distribution operations typically “perform” in one of two modes of operation
– normal or abnormal. Abnormal operation stems from the fact that electric distribution
systems operate in harsh environments that are beyond the control of the utility.
These systems are inherently exposed to nature’s elements, including animals,
vegetation and weather, as well as geological and human incidents. Since it
is cost prohibitive to build a system that operates flawlessly in this environment,
each utility has adopted design, construction and maintenance standards based
on an acceptable trade-off between cost and performance as determined by their
regulators and/or customers served.

Incorporated in the development of these standards are limits related to the
harshness of the environment that loosely define “normal” operating conditions.
The “lights are expected to stay on” while operating within these limits. The
environmental harshness will occasionally exceed these limits (termed “major
event”), changing the operating mode to abnormal and altering performance expectations.

Standard Practices
Consistent measurement practices with defined methodologies and terminology
are critical to meaningful tracking and analysis of electric distribution performance.
Such measures are not only required to support internal distribution company
decision making; analysts, investors, regulators, owners and large commercial/industrial
electric customers also regularly compare performance between different companies
in support of their own decision making.

The Institute of Electrical and Electronics Engineers (IEEE) is the commonly
accepted authority on standardization in this area and has published the IEEE
Guide for Electric Power Distribution Reliability Indices, IEEE Std. 1366™.
Industry surveys conducted by the IEEE/PES Distribution Subcommittee Working
Group on System Design have identified the most commonly used customer-based
indices (see Figure 1), listed in order of frequency of use. These calculations
transform raw interruption
data typically logged during the day-to-day operation of the electric distribution
system into meaningful information.
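As a point of reference, the sketch below illustrates how raw interruption records
might be rolled up into the most widely used customer-based indices (SAIFI, SAIDI
and CAIDI). The record layout and figures are hypothetical; production systems
draw these inputs from outage management data.

```python
# Minimal sketch: computing IEEE 1366 customer-based indices from raw
# interruption records. The record layout and numbers are hypothetical.
interruptions = [
    # (customers_interrupted, duration_minutes)
    (1_200, 95),
    (430, 30),
    (2_750, 180),
]
customers_served = 48_000  # total customers on the system

customer_interruptions = sum(c for c, _ in interruptions)
customer_minutes = sum(c * d for c, d in interruptions)

saifi = customer_interruptions / customers_served  # interruptions per customer served
saidi = customer_minutes / customers_served        # interruption minutes per customer served
caidi = saidi / saifi if saifi else 0.0            # average restoration time per interruption

print(f"SAIFI = {saifi:.3f}  SAIDI = {saidi:.1f} min  CAIDI = {caidi:.1f} min")
```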

Transformation of this information into a knowledge base requires further augmentation
with operational data related to the cause of each interruption, operating mode
at the time of each interruption, type of isolating devices involved in each
interruption, and type of any electric system component that may have failed.

A recently submitted white paper by the IEEE/PES Distribution Subcommittee
Working Group on System Design, titled, “Collecting and Categorizing Information
Related to Electric Power Distribution Interruption Events: Data Consistency
and Categorization for Benchmarking Surveys,” further defines a minimum set
of data collection categories required for benchmarking (see Figure 2).

The resulting collection of knowledge provides a set of powerful information
on which to base operational, engineering and financial decisions across the
entire electric distribution enterprise.

Monitoring and Reporting
Simply measuring performance and building a base of knowledge does nothing,
in and of itself, to affect performance. Performance must be monitored over
time for the measures selected to be useful in steering processes in a direction
that will result in organizational success. Winning decisions require knowledge
of “where we are” as well as “where we are heading.”

Performance monitoring and reporting methods vary depending upon the business
needs of the particular function or process that is to benefit from the available
knowledge. Required periodicities of access and information granularity are
important factors in determining the best methods for the accumulation, structure
and dissemination of knowledge. Static annual reports of KPIs may suffice for
some planning functions, but operations management functions typically demand
dynamic access to current information – not only to the KPIs, but also to the
raw data that surrounds and impacts them.

A diverse range of performance monitoring and reporting needs exist within
the electric distribution enterprise:

  • Predefined reports of aggregate information about performance over relatively
    long periods of time (quarter-to-quarter, year-to-year) typically meet the
    needs of performance-based regulation and regulatory compliance reporting;
  • System planning and design functions need similar reports of like information
    but also benefit from more interactive reporting methods;
  • The ability to view information from various system perspectives is necessary
    to support reliability planning efforts;
  • Maintenance functions typically require more granular information about
    performance over somewhat shorter periods of time (week to week, month to
    month) and need even more interaction with the raw data;
  • The ability to view information from various equipment perspectives is necessary
    to support reliability-centered maintenance efforts; and
  • Operation planning functions require very granular performance information
    over even shorter periods of time (hour to hour, day to day) and need maximum
    flexibility in reporting.

All of these functions can benefit from, if not require, the ability to filter
specific abnormal occurrences or accepted normal occurrences from the KPIs and
other information reported. For example, comparison of the performance of a
particular system design against related standards could be skewed if data from
events occurring outside of the predefined limits within the standard (e.g.,
major events or scheduled interruptions) are not excluded. This could result
in unnecessary and costly upgrading and overbuilding of the distribution infrastructure.
Obviously, the definitions of excluded occurrences are critical to accurately
evaluating and comparing KPIs and related information.
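IEEE Std. 1366 addresses this exclusion problem with a statistical screen for
major event days (the “2.5 beta” method). The sketch below illustrates the idea
on a short, hypothetical series of daily SAIDI values; in practice the threshold
is derived from roughly five years of daily history.

```python
import math
import statistics

# Sketch of the IEEE 1366 "2.5 beta" screen for major event days, applied to
# hypothetical daily SAIDI values (minutes per customer per day). In the
# standard, alpha and beta come from five years of history; a short list
# stands in for that history here.
daily_saidi_history = [0.8, 1.1, 0.6, 2.4, 0.9, 1.7, 0.7, 3.1, 1.2, 0.5]

logs = [math.log(x) for x in daily_saidi_history if x > 0]
alpha = statistics.mean(logs)
beta = statistics.stdev(logs)

# Any day whose SAIDI exceeds this threshold is flagged as a major event day
# and excluded from "normal" reliability reporting.
t_med = math.exp(alpha + 2.5 * beta)

todays_saidi = 6.8
is_major_event_day = todays_saidi > t_med
print(f"T_MED = {t_med:.2f} min/customer, major event day: {is_major_event_day}")
```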

When a major event occurs on the electric system resulting in customer interruptions,
the required periodicity of access to information increases to less than hour
to hour, the required granularity of information increases significantly and
the number of interested parties increases by orders of magnitude.

  • Operations management and support personnel need current information, including
    information down to the individual electric customer level, on which to base
    the minute-to-minute decisions necessary to assist operators/dispatchers in
    quickly and safely restoring electric service;
  • Corporate communication personnel need similar information for dissemination
    to the media and response to direct public contact;
  • Planning, design, maintenance and other personnel may be called upon to
    lend assistance in developing restoration plans and communicating with external
    entities; and
  • Depending on severity of the event, regulatory and emergency management
    organizations may require access to current information.

Technology Challenges

Measuring, monitoring and reporting performance presents several technology
challenges that cannot be adequately addressed by traditional operational IS
approaches. Although additional users could be added to the operational system
to gain access to the operational data, each additional hit to the operational
database degrades performance, however minimally, of the application programs
that depend on the operational database. Users interested in the data, as opposed
to the application program results, will typically generate many, many more
hits per unit of time than the application programs for which the database was
designed. Furthermore, these additional hits will tend to occur during the same
time as maximum usage of the application programs (normal working hours).

Even setting aside IS performance, direct cost and physical implications, opening access
to the operational data on a broad scale through the operational system presents
additional security risks that could directly and negatively impact the operational
system itself. Preventing breaches would significantly add to the challenges
of the system administrators.

Consideration must be given to the functionality of the data analysis and reporting
tools available. Custom development and maintenance of these tools is very costly.
Without creative solutions, these challenges lead to severe limitations and
ultimately abandonment of access to operational data – the exact opposite of
what the organization needs.

The Data Mart

Helpful in understanding the data mart solution is a comparison of operational
system data (see Figure 3) and business management environments (see Figure
4) to that of the data mart.

Fundamental to the proper design of a data mart solution is a thorough understanding
of the requirement issues dictated by the specific implementation under consideration.

Requirements typically fall into one of three categories:

  • The business function and scope requirements definition asks questions about
    which specific business problems in what part of the enterprise are to be
    addressed;
  • The data requirements definition asks questions about characteristics of
    source data and the needs for its extraction, refinement and re-engineering;
    and
  • The access and usage requirements definition asks questions about who will
    use the solution, when will they use it, what will they use and how will they
    use it.

Business Function and Scope Requirements
The various functional areas that can benefit from a distribution operations
data mart have varying needs. These needs affect data mart design with respect
to dimensional analysis, granularity of information and temporal analysis.

Dimensional analysis (see Figure 5) involves determining how best to examine
or “slice and dice” the information captured to address the business problems
at hand in a particular area of the enterprise.

Dimensions relate to things such as time, space, frequency, etc. Figure 6 illustrates
some possible examples of time and space dimensions in the realm of electric
distribution operations. Other possible dimensions are: number of interruptions,
number of customers out, etc.

Multidimensional analysis (see Figure 7) entails “slicing and dicing” by a
combination of multiple dimensions. Some examples are: outage duration and reliability
index by outage cause; daily trouble by branch by region by company; reliability
indices by year by feeder by substation by branch; momentary events by feeder
by substation by branch, etc.
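As a simple illustration of this kind of slicing and dicing, the sketch below
aggregates a handful of hypothetical outage records along cause, year and region
dimensions using pandas; a data mart would support the same operations against
far larger, properly modeled dimension and fact tables.

```python
import pandas as pd

# Minimal sketch of "slicing and dicing" hypothetical outage records along
# several dimensions, in the spirit of the multidimensional analysis above.
outages = pd.DataFrame({
    "year":         [2004, 2004, 2005, 2005, 2005],
    "region":       ["North", "South", "North", "North", "South"],
    "cause":        ["Tree", "Animal", "Tree", "Lightning", "Tree"],
    "customers":    [1200, 430, 2750, 310, 900],
    "duration_min": [95, 30, 180, 45, 60],
})

# Slice: total customer-minutes of interruption by cause.
outages["customer_minutes"] = outages["customers"] * outages["duration_min"]
by_cause = outages.groupby("cause")["customer_minutes"].sum()

# Dice: outage counts by year and region (a two-dimensional roll-up).
by_year_region = outages.pivot_table(index="year", columns="region",
                                     values="customers", aggfunc="count")

print(by_cause)
print(by_year_region)
```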

Granularity analysis (see Figure 8) refers to determining how much detail is
required in the data to meet the business requirements of the data mart. Different business
functions typically require different levels of granularity with varying levels
of summarization and/or aggregation. The importance of each of these requirements
must be balanced against the cost of the IS necessary to provide them.

The granularity of the time unit against which data is required to be captured
and analyzed must be scrutinized (e.g., this year, last year, this quarter,
last quarter, today, etc.). Temporal distortions can occur due to the differences
in the rate of change of the various dimensions. For example, changes to normal
circuit topology and customer information typically occur at a slower rate than
changes in abnormal device states.

The data of interest from electric distribution operations ranges from the
transactional level (customer call) to the aggregate level (outages by district)
and from the historical (last year) to the current (now), each having different
impacts on the data mart design.

Access and Usage Requirements
Different users require varying levels of access to, and views of, the information
contained within the data mart. Management is typically interested in easy retrieval
of predefined information summaries at multiple levels to support decision making.
Other business users are typically interested in the added ability to “massage”
the information and vary the views, as well as access to detailed, archived
data.

A powerful tool for the business user that is enabled by a properly designed
data mart is Online Analytical Processing (OLAP). This tool provides for multidimensional
analysis using drill-down and roll-up techniques as well as iterative analysis
of data by changing the order of dimensions.

Summary

A comprehensive analysis of the combined set of issues leads to the conclusion
that the application of data warehousing concepts is fundamental to the development
of a robust solution. These concepts include warehousing data offline from the
operational database, organizing data for efficient data analysis and reporting,
opening access to operational data on a broad scale at minimal cost and isolating
business users from the operational system. In addition, the popularity of data
warehouse solutions has spawned the development of powerful, readily available,
cost-effective data analysis and reporting tools.

 

Let’s Take A Closer Look at Generation Operational Excellence

Accelerating world demand for energy, coupled with recent natural disasters,
is driving, and will continue to drive, gas, oil, coal and nuclear fuel prices
to unprecedented levels. Increasing global warming concerns are driving additional
emphasis on tightening emission limits, requiring utilities and merchant generation
companies to spend hundreds of millions of dollars per fossil-fueled plant to
either comply or shut down existing plants. Both ends of the generating process
are putting plant, division and corporate generation managers in an ever-tightening
squeeze to decrease costs in the operations and maintenance budgets.

This phenomenon is not new. For the last 10 to 15 years, energy and utility
companies have had to do more with less. The problem now is that the low-hanging
fruit has been harvested. Voluntary separation programs and other destaffing
exercises have been conducted, which have yielded the requisite operational
and maintenance budget decreases. However, this has been done without rethinking
the way work gets done or considering how to apply technology to assist in better
operational and maintenance decision making.

In other words, for the past decade or more, most of us in the energy and utilities
industry have talked about operational excellence, a.k.a. cutting costs, without
understanding what the term really means. In light of ever-increasing pressures
and the reality that we may have cut our organizations to the point that we
might have jeopardized the long-term integrity of the generation assets for
which we are stewards, perhaps it is time to really understand and implement
operational excellence techniques.

What Is Operational Excellence?

A Yahoo search on operational excellence yields more than 5 million hits, mostly
for products or services that can yield operational excellence for a company.
In searching for a definitive definition of operational excellence, the best
that I have found over the years comes from The Discipline of Market Leaders
by Michael Treacy and Fred Wiersema.[1] In Chapter 4, “The Discipline of Operational
Excellence,” the reader can start to draw some interesting insights into what
behaviors and traits a company that competes under this model should possess.[2]

The fundamental characteristic the authors cite is lowest cost, but that has
some additional caveats. What lowest cost means is that nobody else in the market
can sell the product or service for less than you can when all of the costs
to the consumer of owning or using your product or service are taken into account.[3]
Other characteristics include:

  • Highly regimented, proceduralized and rules-driven;
  • Aggressively pursue automation to minimize labor and lower variable costs;
  • Standardized assets and efficient operating procedures;
  • Rejection of variety;
  • Work to reshape customers’ expectations since you cannot be all things to
    all people;
  • Focus on activity-based costs and transaction profitability;
  • Everyone knows the plan, the rules of the game and exactly what they have
    to do and when;
  • Low overhead with efficient, re-engineered business processes; and
  • Passionate about measuring and monitoring to help ensure rigorous cost and
    quality control.[4]

Many of you reading this may be saying that this is fine for other companies,
but what does this have to do with companies that have large fleets of generation
assets? The answer is that everything these authors said about operationally
excellent companies in 1995 applies today, and it especially applies to generation-intensive
companies.

Applying Operational Excellence Principles

Before we start to apply operational excellence principles, let’s explore the
following background questions:

  • Do you make operational and maintenance decisions based on market-based
    margin conditions? (A simple margin calculation is sketched below.)
  • Do your information systems provide the control-room operators with a view
    of the profitability of each unit’s current and cumulative performance so
    you can measure against the business plan?
  • Have unit and portfolio optimization scenarios been developed that take
    into account market conditions and timing thereof, fuel blend, emissions,
    unit consequences, heat rate, EFOR (emergency forced outage rates), derates,
    superheat temperature, optimum operational frequencies – deslag, soot blow,
    air preheater washes – and cross-organizational process optimization between
    trading and operations?
  • Are you doing the right maintenance work for the right reasons, i.e., to
    what extent have you implemented reliability-centered maintenance and condition-based
    maintenance to achieve a one-time reduction in the maintenance workload?
  • Are you performing the remaining maintenance work as effectively and efficiently
    as you can, i.e., are you optimizing the performance of your personnel in
    getting work done?
  • Are you providing the correct metrics to manage the processes and achieve
    the operational excellence vision?

Reading through this list of questions, some of you may have answered “yes” or
“kind of/sort of,” and some of you may have felt that the questions did not apply
because you operate base-load or peaking units. The fact is there are very few
companies that can answer yes to all of these questions. However, answering yes
to these questions is at the heart of becoming an operationally excellent generation
company.
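To make the first question concrete, the sketch below shows a minimal market-based
margin (spark spread) check for a gas-fired unit. The prices, heat rate, variable
O&M and emission cost are hypothetical; an actual dispatch decision would also
weigh start costs, ramp limits, outage consequences and portfolio positions.

```python
# Minimal sketch of a market-based margin check for a gas-fired unit.
# All prices, the heat rate and the emission cost are hypothetical.
power_price = 58.0        # $/MWh, current market price
gas_price = 6.5           # $/MMBtu
heat_rate = 7.8           # MMBtu per MWh for this unit
variable_om = 2.5         # $/MWh non-fuel variable O&M
emission_cost = 1.2       # $/MWh allowance/compliance cost

fuel_cost = heat_rate * gas_price                  # $/MWh
margin = power_price - fuel_cost - variable_om - emission_cost

# A positive margin argues for running the unit (and for deferring discretionary
# derates or outages); a negative one argues the opposite.
action = "run" if margin > 0 else "back down / schedule maintenance"
print(f"Margin: {margin:.2f} $/MWh -> {action}")
```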

What Is the Value of Applying Operational Excellence Principles?

The value of applying operational excellence principles is great. Some of the
benefits of doing so include the following:

  • Market-based margin decision making: The value of focusing on this
    area will vary from company to company and market situation to market situation.
    However, the key areas for improvement and range of savings include:

    • Outage in a box – Shorter-duration, more frequent outages based on market
      conditions could potentially reduce purchased power costs, reduce risk
      and reduce available production to match market conditions.
    • Derates to perform maintenance – Performing derate maintenance when
      market conditions allow will reduce purchased power costs and increase
      revenue.
  • Doing the right maintenance for the right reasons: Effectively implementing
    reliability- and condition-based maintenance can produce a one-time reduction
    of 25 to 30 percent in the annual maintenance workload without increasing EFOR
    or degrading unit availability/performance (heat rate).[5]
  • Performing maintenance as efficiently and effectively as possible:
    Typically this dimension looks at the direct activity of the maintenance workers.
    Informal benchmarking studies have shown that typical hands-on work direct-activity
    rates for non-nuclear generation facilities run 45 percent of every 8-hour
    day.[6] Stated differently, only 3.6 hours of every 8-hour day for every maintenance
    trades and labor person is spent on hands-on maintenance work. By focusing
    on delay codes and proactively working with unions, it is feasible to easily
    increase this to 50 percent with a stretch to 55 percent if you focus on the
    causes of delay in accomplishing maintenance work. Think about the potential
    impact: If you have a 1,000-person maintenance workforce, every 1 percent
    improvement in direct activity equals 17,800 hours, or roughly $1 million at
    $60 per hour fully burdened (this arithmetic is sketched after this list).[7]
  • Focus on the proper metrics: One of the metrics that an operationally
    excellent generation company should focus on is the percentage of maintenance
    that is planned versus the percentage that is emerging or corrective. Informal
    benchmarks have shown that the percentages for this typically run 60 to 65
    percent for planned maintenance and 35 to 40 percent for emerging or corrective
    maintenance. The target for sophisticated asset management companies is 85
    percent planned and <15 percent corrective or emerging.[8] Furthermore, numerous industry studies have shown that corrective maintenance is four to six times more expensive than planned maintenance.[9] Take a hypothetical generation company spending $750 million on maintenance annually and performing 65 percent planned and 35 percent corrective maintenance. Using a 2X multiplier, improving the planned to corrective maintenance percentages by 5, 10 and 20 percent could yield a $28 million, $55 million and $128 million reduction in maintenance expenses. Using a 4X multiplier, it could yield a $55 million, $119 million and $220 million reduction in maintenance expenses.[10]
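For illustration, the sketch below reproduces the direct-activity arithmetic
cited above. The figure of 1,780 available hours per person per year is an
assumption chosen to be consistent with the 17,800-hour estimate; the workforce
size and loaded labor rate come from the text.

```python
# Sketch reproducing the direct-activity arithmetic above. The 1,780 available
# hours per person per year is an assumption chosen to match the 17,800-hour
# figure in the text; the other numbers come from the article.
workforce = 1_000                    # maintenance trades and labor personnel
available_hours_per_person = 1_780   # assumed paid hours available per year
loaded_rate = 60.0                   # $/hour, fully burdened

hours_per_point = 0.01 * workforce * available_hours_per_person
value_per_point = hours_per_point * loaded_rate

# Moving direct activity from 45% to 50% recovers five such points.
uplift_45_to_50 = 5 * value_per_point
print(f"Each 1% of direct activity = {hours_per_point:,.0f} hours, about ${value_per_point:,.0f}")
print(f"45% -> 50% direct activity is worth about ${uplift_45_to_50:,.0f} per year")
```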

The potential savings speak for themselves. Now let’s look at how to make this
happen.

How Does a Company Become Operationally Excellent?

There is no magic solution that you can find, buy and implement to become operationally
excellent. First, it takes a company commitment to designate someone to focus
on operational excellence. Second, the company needs to form a team whose sole
focus is to establish the operational excellence philosophies and the key performance
metrics by which the business will be run. Third, the data and
information necessary to implement the operational excellence vision must be
established. Fourth, the systems that house this information and data must be
identified. Finally, the company’s information technology organization must
be challenged to integrate and provide all of the information the company needs
to be successful.

There is one global generation company that appears to have mastered operational
excellence. The former non-nuclear British Energy generation assets were privatized
during the British deregulation and privatization of the late 1990s. The company
that owns these assets is now RWE Innogy, and most of its success has been attributed
to: 1) a focus on market-based margin decision making, and 2) integrating plant,
transactional and market information to enable effective decision making.

According to a presentation entitled “Integrating Information Technologies into
the Enterprise” by Robin Gomm of RWE Innogy Plc given at the Electric Power 2004 conference,
numerous benefits can be derived from exploiting IT. Some of these benefits
include cost management (fixed to variable), extension of plant life and improvement
in its performance and efficiency, optimized resource management, and reduction
of operational and commercial risk.[11]

To derive these benefits, previous islands of information must be effectively
integrated and made available to every aspect of the business in order to optimize
end-to-end performance as opposed to parts or portions of processes. For example,
Gomm’s paper mentioned the distribution of commercial, performance and forecasting
data. With this accomplished, information could be shared by the trading department
with the production department regarding such aspects as the definition of future
commitments, enhanced trading activities to optimize outage planning, and support
for an internal market. Information could be shared by the production department
with the trading department regarding the definition of operational and plant
technical data in commercial terms, plant risk and reliability issues, and
opportunities for enhanced trading and outage optimization.[12]

In addition, according to Gomm’s paper, distributing commercial, performance
and forecasting data can help promote greater understanding by the trading department
of operational problems and activities, and greater understanding by the operations
department of the trading and asset management processes. Furthermore, data
shared by a company’s internal market can aid in the comprehensive evaluation
of risk in plant operation and in Business Risk Assessment.[13]

Figure 1 shows how integrating plant, market and transactional information might
be accomplished. As shown, the steps involve gathering the data, knowledge discovery
and decision trade-off.

To move toward generation operational excellence, companies also need to move
from a strictly cost focus to a business focus (see Figure 2). For example,
instead of just focusing on cutting costs, companies should focus on increasing
revenue and profits. Instead of just focusing on collecting business data, companies
should focus on creating enterprisewide knowledge. Instead of just focusing
on processing customer transactions, companies should focus on developing innovative
services for customers. This change of focus from cost to business will assist
generation companies in better operational and maintenance decision making and
can help companies become the lowest customer-perceived cost provider in the
market.

Conclusion

Becoming an operationally excellent generation company is a transformational
journey. By implementing the steps properly, you can effectively change from
a past focus on cost to a future focus on the business, thereby moving along
the road toward generation operational excellence.

This article represents the views of the author, not the views of TVA.

Endnotes

  1. Treacy, Michael and Fred Wiersema. “The Discipline of Market Leaders.” Perseus
    Books, 1995.
  2. Ibid., p. 49.
  3. Ibid.
  4. Ibid., pp. 51-58.
  5. IBM Business Consulting Services analyses 1998–2005.
  6. IBM Business Consulting Services analyses and author audience surveys during
    speaking engagements 1998–2005.
  7. Hypothetical calculation performed by author for this article, 2005.
  8. http://www.maintenancebenchmarking.com/best_practice_maintenance.htm.
  9. IBM Business Consulting Services RCM analyses 1995–2005.
  10. Hypothetical calculation performed by author for this article, 2005.
  11. From “Integrating Information Technologies into the Enterprise,” presentation
    by Robin Gomm of RWE Innogy Plc at the Electric Power 2004 conference held
    March 30–April 1, 2004 in Baltimore, MD, proceedings disc.
  12. Ibid.
  13. Ibid.

 

How to Optimize Reliability in Utility Networks

While reliability is perhaps one of the most watched performance indicators
in the utilities industry today, improvement efforts are often based on best
practices or trial and error. As a result, mountains of data are available to
monitor utility network reliability. Now analytical tools are needed to refine
this data into information that can be used to optimize restoration efforts
and simulate asset management strategies designed to ultimately improve service
reliability. These new tools will be used to analyze and assess trade-offs between
cost and reliability, providing utility network operators or owners with the
means to simulate restoration and asset management strategies and continuously
optimize tactics. First let’s review how the various approaches to managing
reliability of utility networks have evolved.

Evolution of Network Operations

Over time, “best practices” in the industry have evolved, taking advantage
of new technologies and knowledge to deliver targeted service reliability. This
evolution will continue, and progressive utilities will lead the industry in
pioneering new tools and techniques to continually enhance reliability.

The Past

In the past, service reliability was supported by silo applications, such as
work and asset management systems (see Figure 1). These systems optimized tasks/activities
based on the data within their systems with manual interfaces to coordinate
activities outside of the specific area. For example, work management systems
would produce work schedules for each field service crew with some automated
interface from the customer information system. However, these schedules would
be manually faxed or otherwise distributed to the crews. The results of the
work would be subsequently recorded on multipart work orders by the crews and
manually entered back into the work management system. This disjointed manual
process prevented the ability to “see across” all activities to understand how
work could be optimized on a given day across the entire field service organization.

Recently
Recently, efforts to support utility network reliability focused
on mobile workforce tools to improve field-force work effectiveness by eliminating
some of the manual activities. Risks were assessed primarily based on known
asset attributes (e.g., age, cost). Sophistication grew in regard to maintenance
planning, establishing reliability and condition-based maintenance and replacement.
However, the basis for decision making – related to sustaining or improving
reliability – was still intuitive and based on limited scenarios. The relation
between action and result was determined through subjective processes.

Today
The industry is now leveraging more knowledge and sophisticated
technologies to bring about more seamless – and integrated – decisions and actions
to support reliability. These integrated systems can improve work delivery,
but, interestingly, some of these systems are disabled during unplanned outages
in favor of “storm teams.” The reason: They are not responsive enough, so utilities
resort to manual decision making to speed service restoration. In addition to
the integrated work management processes, asset management processes have also
evolved to optimize return on assets. These asset management processes and tools
are driven by increasingly sophisticated risk- and condition-based scenario
analysis. A key tenet of these analyses is that optimized return on assets will
also yield acceptable overall network reliability performance. These asset management
processes can be conducted more frequently than in the past due to improved
technology processing speeds, producing large volumes of new data.

Next Steps
As the mountains of data grow, so do the possibilities. Therefore,
complex analytical capabilities are needed to refine the available data into
information that can be used to optimize restoration efforts – in real time
and continuously. This same data can be used to simulate reliability-based asset
management strategies – on a continual basis – that balance reliability service
levels with return on assets, taking more of the guesswork out of decision making.

The Future
With the “brain” in place to conduct complex analyses continuously
using more current data, additional technologies can be leveraged to automate
network operations to an even greater degree. The possibilities include:

  • Remote asset monitoring and control – sensor technology;
  • Large volumes of asset operational information;
  • Consistent use of asset information for risk management;
  • Pre-emptive action in advance of faults; and
  • Dynamic asset reconfiguration.

Two Pillars of Optimization

There are two major aspects to consider in optimizing reliability: restoration
and asset management. Both are key and often are considered independently.

Restoration Management
Strategies and tools are needed to restore service as quickly as possible. Outage
management begins well before the outage (see Figure 2). Outage planning and
staging are needed to help ensure rapid deployment.

Once service is lost, early, accurate and continuous assessment is key. Many
utilities have assigned a statistically representative portion of assets within
each region to allow for a quick assessment and interpolation of damage. This
initiates predefined plans for deployment of resources and equipment. Mobile
dispatch systems notify crews and initiate truck rolls. Advanced systems do not
stop there; as new information emerges and restoration tasks are completed, crews
are re-dispatched and resource deployment is optimized, trading off the number
of customers affected against the duration of the outage. Advanced utilities are
considering tools to support continuous assessment during an outage to balance
costs with restoration speed. As work is completed, the work orders are closed.
Once restoration is complete, the process starts again. Through post-mortem
analysis, outage performance is critiqued and outage plans are refined as needed.
Forecast and assessment tools are also revisited and adjusted.

Asset Management
There are two primary strategies for managing the avoidance
of interruptions.

  • The first is a design strategy. By sectionalizing the network (e.g., using
    more fuses, reclosers and sectionalizers), the number of customers affected
    by a given asset can be contained. This often entails expensive modifications
    that must be spread over an extensive period (e.g., three to five years).
    To optimize the return on these investments, operators must have a clear understanding
    of their impact (the specifics, not just generalities) and prioritize the
    investments according to reliability return.
  • The second strategy is maintenance-related. Sophisticated operators employ
    risk assessment mechanisms to identify high-risk assets and target maintenance
    programs on them (a minimal risk-ranking sketch follows this list). This strategy
    does not require large investments but, instead, necessitates the collection
    and analysis of significant amounts of data related to asset condition and
    reliability. Again, a long lead time is required.
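As a minimal illustration of the second strategy, the sketch below ranks a few
hypothetical assets by a simple risk score – annual probability of failure
multiplied by the customers and outage hours at stake. Real programs use far
richer condition data and consequence models.

```python
# Minimal sketch of a risk-based maintenance ranking: score each asset by
# probability of failure times consequence (customers affected x expected
# outage duration). All asset data here are hypothetical.
assets = [
    # (asset_id, annual_failure_probability, customers_downstream, expected_outage_hours)
    ("XFMR-014", 0.08, 3_200, 4.0),
    ("RECL-207", 0.15, 850, 1.5),
    ("CABLE-33", 0.03, 6_100, 8.0),
]

def risk_score(prob, customers, hours):
    # Expected customer-hours of interruption per year attributable to this asset.
    return prob * customers * hours

ranked = sorted(assets, key=lambda a: risk_score(a[1], a[2], a[3]), reverse=True)
for asset_id, prob, customers, hours in ranked:
    print(f"{asset_id}: about {risk_score(prob, customers, hours):,.0f} customer-hours/year at risk")
```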

These strategies are often applied in a suboptimal manner; that is, the operator
does not have the necessary information to prioritize maintenance and design
strategies. Often priorities are based on industry “best practices.” Although
these are a valid source of ideas, the specifics of each network’s configuration,
condition and demands prevent concepts from transferring equally.

Optimizing Reliability and Investments

Analytical models can be developed to simulate and test strategies (see Figure
3). By establishing the mathematical relationship between individual reliability
and restoration strategies and their potential impact on specific networks,
operators can better understand and prioritize investments. To take it a step
further, these algorithms can be related to assess trade-offs between restoration
and reliability investments.

The Approach to Optimization

System analysis and development of tools to optimize reliability is a significant
undertaking. First of all, there are many aspects of each network that make
it unique and limit the transferability of analytical models. The configuration
of the network, the condition of the assets, the structure of the organization
and the availability of data all factor into program development. A logical
set of comprehensive steps is required.

Innovation Review
The starting place is where you are. What reliability
improvement initiatives are currently under way? How can they be enhanced? The
objective of this activity is to understand the current improvement initiatives,
planned and under way, and to identify possible enhancements by leveraging supporting
tools (e.g., weather forecasting, work management, dispatch and communications
technologies), analytical models, industry best practices, business case analyses,
and expertise internal and external to the organization. A root cause analysis
of recent interruptions will provide insights regarding cause and effect. Best
practices studies will provide ideas as to how others have addressed these causes.
Contrasting this insight with improvement initiatives under way will help identify
potential enhancements to the initiative. The goal is to improve the impact
of ongoing initiatives and develop more granular knowledge regarding the relationship
between causes and improvement initiatives.

Developing the Reliability Analysis System
There are five elements in
the development of analytical tools to optimize asset management and restoration
management strategies:

  • Develop and optimize requirements – Begin by interviewing key operations
    staff and other experts to gather information relevant to the specific areas
    of optimized scheduling and dispatch and asset management. The tasks will
    be: (1) gather business knowledge, (2) determine data availability and data
    format, (3) establish an appropriate linear objective function to be used
    in the optimization, and (4) define hard and soft constraints that need to
    be enforced on each aspect. There may be other issues that need to be discussed
    as a result of interviews and meetings with the experts. These issues will
    be addressed in the design detailing the high-level requirements for the overall
    route optimization solution.
  • Analyze data related to improved reliability – Review data sources
    that may contain data – current and historical – that could be used to build
    analytical models. Utilize expertise in: data mining and modeling; an understanding
    of the dynamics of risk, reliability and restoration; and the availability
    of data to develop some potential applications of analytics to improve the
    utility’s ability to predict reliability.
  • Develop a restoration model – Define the optimized scheduling and dispatch
    problem in mathematical terms. It is assumed that the objective function and
    the constraints are linear. Model the problem as a mixed-integer program or
    as a linear program; the decision will be made after conducting preliminary
    analysis on available data (a small dispatch sketch follows this list). Conduct
    a similar analysis for probability of failure and impact of failure. Develop
    a draft mathematical model formulation for optimized scheduling and dispatch.
    This mathematical model will have the capability to serve as a basis for a prototype.
  • Develop an asset management model – Apply asset management concepts
    (e.g., reliability-centered maintenance and commercial-based maintenance,
    risk/reward capital allocation, etc.) in developing optimized investment and
    maintenance in mathematical terms. The approach is similar to the restoration
    model described above. The new dimension will be identifying interrelationships
    between the asset management model and the restoration model.
  • Analyze the potential for integration of other tools into the Reliability
    Analysis System
    – The objective of this activity is to analyze the environment
    to determine the best way to customize the products and usage; to evaluate
    dependency and integration issues for mathematical optimization component(s)
    for the decision support system; and to fine-tune modeling.
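For illustration, the sketch below frames a tiny restoration-dispatch problem
as a linear program using SciPy, assigning hypothetical crews to hypothetical
outages so that customer-weighted restoration time is minimized. This assignment
structure happens to yield 0/1 solutions from the LP relaxation; richer
constraints (travel sequencing, skills, switching) would call for a true
mixed-integer formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Sketch of a restoration-dispatch model as a linear program: assign crews to
# outages so that customer-weighted restoration time is minimized. The crews,
# outages, times and customer counts are hypothetical.
crews = ["Crew A", "Crew B"]
outages = ["Feeder 12", "Feeder 31", "Feeder 47"]
customers = np.array([1200, 430, 2750])
# Hours for each crew to reach and repair each outage (rows: crews, cols: outages).
hours = np.array([[2.0, 4.5, 3.0],
                  [3.5, 1.5, 2.5]])

n_c, n_o = hours.shape
# Objective: minimize sum of customers * hours over the chosen assignments.
cost = (hours * customers).flatten()

# Each outage must be covered by exactly one crew.
a_eq = np.zeros((n_o, n_c * n_o))
for o in range(n_o):
    a_eq[o, o::n_o] = 1.0
b_eq = np.ones(n_o)

# Each crew can take at most two outages in this planning window.
a_ub = np.zeros((n_c, n_c * n_o))
for c in range(n_c):
    a_ub[c, c * n_o:(c + 1) * n_o] = 1.0
b_ub = np.full(n_c, 2.0)

res = linprog(cost, A_ub=a_ub, b_ub=b_ub, A_eq=a_eq, b_eq=b_eq, bounds=(0, 1))
assignment = res.x.reshape(n_c, n_o).round()
for c, crew in enumerate(crews):
    jobs = [outages[o] for o in range(n_o) if assignment[c, o] > 0.5]
    print(f"{crew}: {jobs}")
```

The same formulation extends naturally to priorities for critical customers or
crew skill constraints, at which point integer variables become unavoidable.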

Identify and Prioritize Innovation Solutions

With the insights gained from the optimization tool, the next step is to identify
and prioritize potential solutions to enable the significant reduction of outages
and outage durations (See Figure 4).

  • Apply insights from reliability analysis system development to enhanced
    initiatives;
  • Rank targeted solution areas to address reliability performance according
    to expected “reliability return on investment”;
  • Determine which combination of initiatives will potentially achieve reliability
    goals;
  • Conduct risk-assessment working session to determine the financial, operational
    and technical risks of each key solution area;
  • Prioritize and outline solution areas and options; and
  • Develop an initial draft road map.

This cycle will repeat continuously. As the model is used, knowledge will expand.

Conclusion

Real-time analysis capabilities will provide a new paradigm for network operations.
New tools based on mathematical models will focus decisions on restoration and
reliability instead of on organizational silos and budgets. Coupled with new
business processes, these new tools will allow utilities to simulate investments
to optimize the balance between reliability and return on investment. Operators
will be able to “fly by wire” in emergency and restoration activities. And finally,
the mountains of data will begin to move.

 

Asset Management – Do You Know Your Risk?

Asset management is the focus for most utilities, primarily because of cost
pressures resulting from the limited availability of capital and O&M funds,
and from customer/regulatory pressure to improve physical network reliability
and operating efficiency. A recent META Group study, “Promising Initiatives
in Distribution Asset Management,” found that utilities are divesting themselves
of noncore utility businesses, resulting in a “back to basics” strategy where
the only choice is to reduce costs. Utilities struggle with how to practically
accomplish this, often resulting in the implementation of autonomous business
processes and technology improvements. As asset management encompasses many
of the utility’s key business processes and extends across the organization,
it becomes obvious that technology solutions are a necessary enabling tool and
that associated changes to the business processes must be fully embraced by
the workforce and the benefits expected to be realized must be consistent with
the utility’s risk profile. Obvious, but much less commonly practiced, is that
the utility’s propensity to assume/accept risk is directly related to the benefits
it can expect to achieve. The difficulty in acting on this has been the lack of a practical
risk measurement that accounts for the major elements of the utility’s risk
that must be managed.

The business drivers for focusing on asset management provide excellent indicators
of the types of projects that must be undertaken for which the risk potential
impact/degree of influence on expected return must be evaluated. For example,
a utility may make a large capital investment in network assets (assess capital
risk) to improve reliability in response to customer satisfaction concerns (company’s
perception/reputation risk). A Primen-Electric utility study showed that “by
investing an average of $1.64 per customer in service delivery a utility company
can achieve an 8 percent increase in customer satisfaction, while investing
$180 per customer in improving distribution infrastructure only improves customer
satisfaction by 5 percent.” Clearly, there is a need for a comprehensive risk/return
model that can be used to evaluate the various utility investments (not just
capital, and not just physical network improvements) that returns/maintains
an acceptable level of balanced, aggregated risk.

The following sections provide a high-level overview of asset management business
drivers to profile projects/risks, and the use of an efficient frontier analysis
to measure and manage portfolio return/risk.

Use of Asset Management Business Drivers to Profile Risk

Utilities, particularly the distribution business, are faced with a number of
critical business drivers, from regulatory compliance (e.g., financial/governance
– Sarbanes-Oxley), security and operational (e.g., system reliability, pipeline safety)
to operational efficiency requirements. Combined with an aging workforce and
aging assets, utilities have realized the need to refocus. As a result, utilities
are moving to a “back to basics” or “core business” strategy that requires a
shift in the areas and amount of emphasis placed on each risk element.

Figure 1 summarizes the key asset management business drivers. Utilities have
long recognized that they make money based on a return against the asset base,
but the noncore investments made over the last 10 years have done little to
contribute to generating returns against this asset base or improve revenue/profit,
and in many cases, have resulted in compromising the company’s creditworthiness.
This, combined with limited rate case opportunities, has constrained the utility’s
access to capital.

With the “back to basics” strategy and constraints to the access to capital,
the utility must reduce costs. While internally, the utility employees’ focus
is on reducing cost, externally this must be transitioned to and presented as
improving shareholder value. Thus, all projects to be considered within the
portfolio should address how shareholder and customer value is improved. META
Group, Inc. (now a part of the Gartner Group) has presented this very succinctly
in an article, “Promising Initiatives in Distribution Asset Management.”

As can be seen from Figure 2, shareholder value, the difference between allowed
returns and operating costs, has continued to erode. Allowed returns in the
European and Australian markets have been largely limited as a result of price
cap regulation, while U.S. markets have to face limited organic growth (resulting
in more M&A activity). The utility’s only choice is to reduce costs – an effort
they’ve taken on a number of occasions (represented by a series of reductions
in cost over time) that has only resulted in “sweating” the assets in terms
of addressing the “low-hanging fruit.” Any further cost-out efforts will not
be easy.

While operational efficiency and regulatory compliance currently dominate the
scene, other factors such as an aging workforce, aging assets, etc., must be
appropriately considered to maintain an appropriate level of risk while making
further cost reductions. As can be seen from Figure 3, a larger capital investment
in assets is likely necessary to achieve an acceptable risk profile as it relates
to the overall age of the asset base. Similarly, as the average age of the workforce
is now approximately 47 years, with retirement typically at age 55 years, capturing
the knowledge base through technology-enabled business processes again requires
some level of capital investment commensurate with the acceptable level of risk.

The question becomes one of finding a manageable and practical way of considering
all of these investments and achieving the expected return without creating
an undue imbalance in the risk profile that’s acceptable to the utility. A method
commonly used by companies as well as individuals in making financial instrument
and project investment decisions is efficient frontier analysis. This same analysis
can be applied here.

Application of Efficient Frontier Analysis

As a utility operates as a business, it must provide both shareholder and customer
value (see again Figure 2). The utility must deliver an appropriate level of
return while maintaining an acceptable level of risk. Everyone knows (or should
know) that there is a direct relationship between return and risk – the larger
the return, the higher the risk. Few companies, however, truly understand whether
their assets reflect appropriate returns for the risks they face. Constrained
by finite budgets, staff and other resources, companies are continually faced
with the issue of deciding where to invest to deliver the most value to the
business. With millions of dollars and hundreds, if not thousands of “project”
investments at companies each year, it makes sense to treat these investment
decisions in a manner similar to how a fund manager determines a portfolio of
stocks.

The concept of projects, here, must be expanded beyond the traditional capital
investments in physical assets, to encompass “projects” focused on improving
the company’s perception, retaining the knowledge base of an aging workforce,
etc. The Primen-Electric utility study referenced earlier showed that
“by investing an average of $1.64 per customer in service delivery, a utility
company can achieve an 8 percent increase in customer satisfaction, while investing
$180 per customer in improving distribution infrastructure only improves customer
satisfaction by 5 percent.” This indicates that customers are willing to accept
lower reliability if they have a better means of communicating or better experiences
with the utility. The capital investment swing in this example exceeds one hundredfold.

Figure 4 is an example of the efficient frontier approach. It is a portfolio
analysis concept that:

  • Evaluates risk versus return for portfolios that can be comprised of a given
    group of investments;
  • Identifies the single highest level of expected return for a given portfolio
    risk profile; and
  • Requires information about individual investment opportunities, including
    the expected return (“return”) and the standard deviation of return (“risk”).

The efficient frontier (curved line) represents the maximum return one can
expect to achieve given all combinations of investments that are constrained
by the utility’s finite budgets, staff and other resources. The portfolio projects
represented here are those that have been identified through the evaluation
of the asset management business drivers, and could include projects such as
capital asset construction, security upgrades, technology solutions to address
the aging workforce, increased customer satisfaction through improved communication
means (e.g., customer self-service), selling-off noncore businesses, improving
cash flow, etc., that need to be considered in efficiently managing assets while
generating the highest return with acceptable risk. To determine the efficient
frontier:

  • The expected value (return) of each project under consideration is estimated;
  • The variance in the potential values (returns) of each project is estimated
    as a measurement of risk;
  • The correlation between each project and every other project is estimated;
    and
  • Constraints (e.g., budget, staff, other resources) that limit which portfolios
    (combination of projects) are acceptable are expressed.

At every level of risk, the portfolio that generates the highest return determines
the efficient frontier curve. A number of commercially available software solutions
will support the required data development and analysis.
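As a minimal illustration, the sketch below enumerates portfolios from a handful
of hypothetical projects under a budget constraint, computes return and risk for
each and keeps the non-dominated set – the efficient frontier. Returns, risks,
correlations, costs and the budget are all invented for the example; real analyses
involve many more candidates and dedicated solvers.

```python
from itertools import combinations
import numpy as np

# Minimal sketch of an efficient-frontier screen over candidate projects.
# All project data and the budget are hypothetical.
projects = ["Feeder automation", "Substation rebuild", "Customer self-service", "Knowledge capture"]
ret = np.array([4.0, 6.5, 2.0, 1.5])    # expected value, $M
risk = np.array([1.0, 2.5, 0.4, 0.3])   # standard deviation of value, $M
cost = np.array([3.0, 8.0, 1.0, 1.5])   # required budget, $M
budget = 10.0
corr = np.identity(4)                   # assume independence for simplicity
cov = corr * np.outer(risk, risk)

portfolios = []
for r in range(1, len(projects) + 1):
    for combo in combinations(range(len(projects)), r):
        idx = list(combo)
        if cost[idx].sum() <= budget:   # budget constraint
            p_ret = ret[idx].sum()
            p_risk = float(np.sqrt(cov[np.ix_(idx, idx)].sum()))
            portfolios.append((p_risk, p_ret, [projects[i] for i in idx]))

# Efficient frontier: portfolios not dominated by a lower-risk, higher-return alternative.
frontier = [p for p in portfolios
            if not any(q[0] <= p[0] and q[1] > p[1] for q in portfolios)]
for p_risk, p_ret, names in sorted(frontier):
    print(f"risk {p_risk:.2f}  return {p_ret:.1f}  {names}")
```

Reading along the resulting curve from left to right also exposes the opportunity
cost discussed below: each additional unit of risk or budget buys progressively
less incremental return.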

Portfolios along the curve are said to be efficient because the utility is
getting maximum value from the available budget, staff and other resources.
Points under the curve are inefficient because they represent either a less-than-optimal
return or higher-than-acceptable risk. Many factors can result in a portfolio
being under the curve, including taking on too many low-value projects or a
mismatch between the supply and demand of technical skills/competencies, leaving
a portfolio that yields less than it could have from the total available resources.
Thus, any position that moves the portfolio’s position away from the efficient
frontier should be challenged.

Another important outcome of efficient frontier modeling is insight into
opportunity cost. The efficient frontier shows the opportunity cost of investing
a unit of additional resources versus the additional value (return) received.
While the utility is selecting its most valuable projects, those with the highest
value-to-cost ratios, the efficient frontier is very steep. As fewer valuable
projects remain, the curve flattens out. This is essentially the 80/20 rule,
where 80 percent of the value is achieved from 20 percent of the investment or
effort. For utilities, or any company that routinely experiences additional
budget cuts or other resource constraints during the year, the efficient frontier
provides a means of identifying the projects that should be placed on hold or
cancelled.
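
The 80/20 effect described above can be seen with a few lines of Python. The
project values and costs below are hypothetical, chosen only to show how
cumulative value rises steeply at first and then flattens as lower value-to-cost
projects are added.

# Hypothetical projects sorted by value-to-cost ratio; cumulative value
# climbs quickly with the first few investments and then flattens out.
projects = [  # (name, value, cost) in arbitrary units
    ("A", 9.0, 2.0), ("B", 6.0, 1.5), ("C", 4.0, 2.5),
    ("D", 2.0, 3.0), ("E", 1.0, 4.0), ("F", 0.5, 5.0),
]
projects.sort(key=lambda p: p[1] / p[2], reverse=True)   # best ratios first

total_value = sum(v for _, v, _ in projects)
total_cost = sum(c for _, _, c in projects)
cum_value = cum_cost = 0.0
for name, value, cost in projects:
    cum_value += value
    cum_cost += cost
    print(f"after {name}: {cum_cost / total_cost:4.0%} of budget "
          f"-> {cum_value / total_value:4.0%} of value")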

Conclusion

Asset management business drivers should shape the portfolio of projects the
utility considers for implementation. Efficient frontier analysis provides the
means to determine a portfolio of projects that can achieve
the maximum return within the utility’s propensity to assume the associated
risk. Any decisions made by the utility that result in a portfolio that falls
below the efficient frontier must be challenged, as the portfolio is yielding
less than it should based on the level of investment made. The efficient frontier
method will provide the utility with better return and risk information upon
which investment decisions can be based and periodically evaluated as conditions
change. Furthermore, it encourages more personnel within the utility to make
economically based decisions, including the consideration of shareholder and
customer value.

 

Informed Decision Making

Corporate management is increasingly called upon to make complex, critical
decisions in short time frames. Industry consolidation, the need to maintain
competitive advantage and pressure to maximize return on investment are key
business drivers. Market leaders have learned that success in this environment
requires comprehensive, accurate and timely data and the tools to effectively
utilize them.

Recent technological advances in service-oriented architecture (SOA), business
and operational applications, digitization of field devices, the networks that
link them and the tools to deploy them, along with the associated improvements
in hardware price performance, open standards and security, have collectively
provided a platform for information process transformation across industries
and markets. This opportunity for transformation is unfolding industry by industry
and market by market based on economics, competition, governmental policy and
business opportunity.

A Changing Approach to Change

The energy and utility industries have heretofore been slow to take advantage
of the transformational opportunities presented by technological innovation.
The conservative culture of these two industries, their social importance to
the communities they serve and their designation as critical infrastructure
providers have resulted in their experiencing slower evolutionary change versus
embracing more rapid transformational change. Currently this approach is changing
as a result of multiple business drivers and opportunities, some of which include:

  • Transmission and distribution infrastructures are aging domestically while
    markets are emerging globally (e.g., India, China). Significant investments
    are anticipated over the next several years to replace aging infrastructure
    and sustain economic growth in developing countries.
  • Environmental policies, such as the Kyoto Accord, are driving the need to
    better manage demand and use of current energy assets to avoid the construction
    of new infrastructure and minimize the impact of retrofitting facilities that
    do not meet emerging environmental standards.
  • An aging workforce is causing employee turnover and an associated loss of
    corporate knowledge.
  • There is a heightened focus on the efficient use of capital and maintenance
    budgets by investors, markets and regulators.

Transformation professionals synthesized the information process needs associated
with these drivers in the energy and utility industries and developed a
comprehensive infrastructure business model to increase energy and utility value,
enabling a concept called “informed decision making.” For energy and
utility companies, informed decision making is about using the data that is
available from field devices, operational systems, business applications and
network assets to improve the timeliness and accuracy of decision making. Informed
decision making is made possible through the combination of an intelligent network
that allows real-time connectivity across the enterprise, an SOA that allows
integration of multiple business and operational applications and advanced analytics
for business intelligence mining.

Value of Informed Decision Making

As an example of the value of informed decision making, the CIO of a large
investor-owned utility that is aggressive in the acquisition of generation assets
wanted to transform the IT organization from a support center to a business
enabler. He felt that informed decision making was able to provide the basis
for this transformation based on the following considerations.

First, consider overall IT support to the enterprise. For IT to become a business
enabler in this case, the department needs to rapidly integrate the business
intelligence resident in the legacy applications of the targeted acquisitions
into the corporate platform. Doing so allows IT to provide a vital service:
delivering better data to the key decision makers in multiple departments who are
making acquisition decisions based on a wide variety of business metrics. These
decision makers cannot be required to learn the intricacies of multiple legacy
systems; the data in these legacy systems must be accessible through the business
tools and models that the decision makers already know.

Second, the IT platforms and business applications resident at the acquired
plants have historically been treated as new stand-alone IT assets, to which
no synergy savings can be applied. This is because historically it has been
very difficult and costly to integrate such legacy assets with the corporate
systems. Determining the extent of such savings is essential to both making
an acquisition business decision and achieving the savings after the acquisition.

IBM’s SOA is the key to providing this capability. The SOA enables services
to be created for business functions, such as work orders and purchase requests,
and these services can be combined with other services regardless of the native
applications within which they are created. Imagine one plant using a commercial
software application for purchasing and another plant using a homegrown system,
while the corporate systems are using SAP software. SOA allows purchase requests
from both plants to be aggregated with those created in the corporate system
to create one purchase order, helping to ensure that the corporation’s strategic
sourcing initiatives are able to be realized across the entire enterprise, not
just in the integrated plants using SAP.
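
The purchasing scenario above can be sketched in a few lines of Python. The class
and field names are illustrative, not IBM or SAP interfaces; the point is that
once each plant’s native application is wrapped behind a common service contract,
purchase requests can be aggregated without the consumer knowing where they
originated.

# Hypothetical sketch of the SOA idea described above: purchase requests from
# plants running different native applications are exposed through one common
# service interface and aggregated into a single purchase order.
from dataclasses import dataclass
from typing import Iterable, List

@dataclass
class PurchaseRequest:
    plant: str
    item: str
    quantity: int

class PurchaseRequestService:
    """Common service contract, independent of the native application."""
    def pending_requests(self) -> List[PurchaseRequest]:
        raise NotImplementedError

class CommercialPackageAdapter(PurchaseRequestService):
    """Wraps one plant's commercial purchasing application (stubbed here)."""
    def pending_requests(self) -> List[PurchaseRequest]:
        return [PurchaseRequest("Plant A", "valve", 12)]

class HomegrownSystemAdapter(PurchaseRequestService):
    """Wraps another plant's homegrown purchasing system (stubbed here)."""
    def pending_requests(self) -> List[PurchaseRequest]:
        return [PurchaseRequest("Plant B", "valve", 8)]

def aggregate_purchase_order(services: Iterable[PurchaseRequestService]) -> dict:
    """Combine requests from all plants into one corporate purchase order."""
    order: dict = {}
    for service in services:
        for request in service.pending_requests():
            order[request.item] = order.get(request.item, 0) + request.quantity
    return order

print(aggregate_purchase_order([CommercialPackageAdapter(), HomegrownSystemAdapter()]))
# {'valve': 20}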

Now consider all the operational systems and the value of being able to rapidly
integrate a newly acquired plant into the corporate network. Consider supervisory
control and data acquisition (SCADA) data and corporate systems data combined on
a single IP-enabled intelligent network that uses quality of service to determine
the priority of data packets on the corporate bandwidth. If this intelligent
network is coupled with SOA, and advanced analytics for business intelligence are
overlaid on top, the goal of having an IT organization that functions as a
business transformation center can be achieved. This is the approach presented to
and embraced by this utility’s CIO.
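
As a simplified sketch of the quality-of-service idea, the Python fragment below
models a shared link where packets are dispatched according to an assumed
priority class; the classes and payloads are illustrative, not an actual network
configuration.

# Simplified model: SCADA and corporate traffic share one link, and a
# quality-of-service class determines which packets are sent first.
import heapq

QOS_PRIORITY = {"scada": 0, "voice": 1, "corporate": 2}   # lower number = sent first

arrivals = [
    ("corporate", "nightly email batch"),
    ("scada", "breaker status change"),
    ("corporate", "file synchronization"),
    ("scada", "feeder load reading"),
]

queue = []
for sequence, (qos_class, payload) in enumerate(arrivals):
    # The sequence number breaks ties so equal-priority packets keep arrival order.
    heapq.heappush(queue, (QOS_PRIORITY[qos_class], sequence, payload))

while queue:
    priority, _, payload = heapq.heappop(queue)
    print(f"sending (priority {priority}): {payload}")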

The informed decision-making infrastructure model (see Figure 1) depicts the
framework needed to support the integration of data from field devices, business
and operation systems.

Implementing an Intelligent Network

The foundation for informed decision making is implementation of an intelligent
network within the enterprise. The key element of an intelligent network is
the transition to a common IP-based environment. In the common environment,
each element of the network is assessed and optimized based on the best low-cost
IP service available, whether provided internally or externally through a service
provider. Each element of the infrastructure is migrated toward network
intelligence based on life cycle or capability enhancement requirements. As
this process unfolds, the informed decision-making infrastructure emerges, providing
a wide range of decision-making tools. Data flows from the data source to the
decision maker and back are created. The capabilities include features such
as end-to-end security, resource management, data mining, common business views
and adaptive computing. All of these capabilities are valuable within the current
corporate boundaries and also provide positive business-case value in support
of mergers and acquisitions.

The intelligent network framework provides operational value by creating an
“aware environment” through implementation of a single corporate IP-enabled
network. Additionally, the network is used in conjunction with deployment of
operational applications such as advanced meter management, mobile workforce
management, asset life cycle management, remote asset monitoring/control and
next-generation information brokering. Together, these components provide the
foundation for advanced network analytics, enabling the transformation of data
into insight, the key for informed decision making.

The intelligent network provides a “sense and respond” framework that gives the
enterprise real-time connectivity across the entire utility value chain, from
data producer (e.g., flow computer, PLC, RTU, ESS) to data consumer (e.g., SCADA
host, automatic meter reading system, inventory management system, modeling/leak
detection). Figure 2 shows the intelligent network utility value chain.

The favored architectural approach decouples the data producers from the data
consumers. This key element insulates applications from the complexities of
the control systems which formerly constrained them. Equally important, this
independence provides energy and utility companies with business flexibility
and agility for integrating or upgrading new operational environments. In other
words, the intelligent network offers a “plug and play” business environment.
This plug-and-play environment allows a decision maker to optimize decisions
across the entire enterprise of today and tomorrow, unconstrained by organizational
boundaries. This is the cornerstone of informed decision making.

In the energy and utility industries, the device, network and data domains
reach throughout the many networks currently in place. These networks include
corporate enterprise WAN and LAN, land mobile radio, public cellular, SCADA
and microwave. When viewed in historical context, these networks enabled discrete
capabilities prior to the existence of many common carrier services. Within
the operating environments of the utilities, land mobile radio predates cellular
services, and microwave existed prior to T-1, frame relay and Internet service
provider services. When the current utility networks are viewed holistically,
in present-day terms with present-day capabilities, significant opportunities
for reduced costs and enhanced capabilities emerge.

Elements of an Intelligent Network

While there are many elements of an intelligent network, the following four
elements define the core design:

  • Internet Protocol (IP) Communications. The intelligent network will
    help make the conversion from analog to digital while providing greater quality
    and access to operating information. By enabling all devices on the network
    through IP communication, companies will be able to grow their network quickly
    and use technology innovations. Examples of these technologies include wireless,
    broadband over power line (BPL) and Voice over Internet Protocol (VoIP).
  • Open Standards-based, Consistent Architecture. The IT environment
    can grow increasingly complicated in energy and utility companies. As new
    projects are deployed, difficulties mount from redundant systems and from
    poor communication across systems. A common architecture is needed that
    incorporates industry-standard data models, open technology communications
    and adoption by business partners. A consistent model enables manageable
    growth while potentially reducing costs and overhead. In essence, open
    technologies make a lower total cost of ownership attainable.
  • Consolidation through Public and Private Networks. Most energy and
    utility companies have elements of a converged communications and collaboration
    environment already in place. These elements may be a basic intranet portal
    or simply IP-enabled telephone systems. However limited such capabilities
    may be, they still are the foundational building blocks for deploying an integrated
    architecture that combines the disparate technologies to transform the business.
    To realize the promise of convergence, companies need a clear strategy to
    transform their organization from a “disconnected business” to one that has
    integrated key processes through collaborative portals. For example, the voice
    revolution started with the invention of the telephone, which allowed people
    to collaborate one-to-one in real time. The Internet provided a many-to-many
    communication network and enabled organizations to engage in truly collaborative
    processes on a global basis for the first time. Today, the convergence of
    voice, video and data is set to forever transform business relationships and
    collaborative strategies; the network of today and tomorrow needs to apply
    these technology innovations.
  • Security Based on Data/Applications. Regardless of the current or
    future infrastructure, information access and security are critical to energy
    and utility companies. Without security technologies, companies can be targeted
    for security attacks, information theft and massive system failure. The network
    needs to leverage the latest technology advances using the three A’s: authentication,
    authorization and administration. This comprehensive diligence and focus on
    security is a key to business resilience of critical infrastructure.

With these elements in place, tools can be used to mine data for business intelligence
to create insights, thereby increasing the value of the data to the enterprise.

Applying Business Analytics

Leveraging the resources created by the four elements above, “business analytics”
are software tools that transform raw data into meaningful business intelligence.
They do this by applying business rules and processes to raw data, by supplying
context through integration of information from multiple relevant sources and
by presenting the information in a readily comprehended format, such as a dashboard.
Data is generated continually within energy and utility companies; for example,
SCADA data, meter data, service report data, financial data and outage data,
among others. Analytics process this data to assist business managers in making
more informed decisions. Analytics can be used to automate many aspects of utility
operation, such as work management, load optimization and inventory management.
Analytics are also valuable tools to analyze processes such as capital expense
planning, service quality assessment and rate-case development and presentation.
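
As one small, hypothetical example of such an analytic, the Python sketch below
reduces raw outage records and a customer count (the record format is assumed,
not taken from any real system) to SAIDI, a standard reliability index that could
feed a management dashboard.

# Business-analytics sketch: combine outage records with a customer count to
# produce a dashboard-ready reliability metric (SAIDI). Data are illustrative.
from datetime import datetime

outages = [
    {"start": datetime(2006, 1, 3, 14, 0), "end": datetime(2006, 1, 3, 16, 30), "customers": 1200},
    {"start": datetime(2006, 1, 9, 8, 15), "end": datetime(2006, 1, 9, 9, 0), "customers": 300},
]
customers_served = 50_000

def saidi(outage_records, total_customers):
    """System Average Interruption Duration Index, in minutes per customer served."""
    customer_minutes = sum(
        (record["end"] - record["start"]).total_seconds() / 60.0 * record["customers"]
        for record in outage_records
    )
    return customer_minutes / total_customers

print(f"SAIDI this period: {saidi(outages, customers_served):.2f} minutes per customer")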

Analytics are a key component of comprehensive utility information architecture.
Energy and utility companies have traditionally been capital- and labor-intensive
operations, but as utility markets become more competitive and utilities become
more automated, it becomes more crucial for utility business managers and executives
to have the information that allows them to connect decisions with true business
drivers.

The business analytics component of informed decision making is used to refine
raw data into information that creates business intelligence. As energy and
utility companies add more intelligent devices and networking to their operations,
the volume of available data will increase to the point of being overwhelming.
Analytics are the tools that make sense of the volume of data and enable the
workforce to focus on the highest-value decisions while automating the remainder.
Analytics also provide a framework to manage the gradual increases in network
awareness. For example, an energy or utility company might start out integrating
SCADA data with financial data through analytics and gradually add automatic
meter-reading data, remote grid or network sensing or data feeds from other
connected enterprises. As each new level of awareness comes online, the analytics
can integrate the new data smoothly into the existing business intelligence
framework.
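
A minimal sketch of that incremental pattern follows; the framework, feed names
and record shapes are assumptions for illustration. Each new data source
registers against the same interface, so existing reports keep working as
observability grows.

# Sketch of incremental integration: new feeds plug into a common analytics
# framework without changing the consumers already using it. Names are invented.
from typing import Callable, Dict, List

class AnalyticsFramework:
    """Registry of data feeds behind one interface."""
    def __init__(self) -> None:
        self.sources: Dict[str, Callable[[], List[dict]]] = {}

    def register_source(self, name: str, fetch: Callable[[], List[dict]]) -> None:
        # New feeds are added without changing existing consumers of the framework.
        self.sources[name] = fetch

    def snapshot(self) -> Dict[str, List[dict]]:
        # Pull the latest records from every registered feed.
        return {name: fetch() for name, fetch in self.sources.items()}

framework = AnalyticsFramework()
framework.register_source("scada", lambda: [{"feeder": 12, "load_mw": 4.2}])
framework.register_source("financial", lambda: [{"cost_center": "T&D", "om_cost": 1.8e6}])
# Later, as grid observability grows, additional feeds join the same framework:
framework.register_source("meter", lambda: [{"meter_id": "M-001", "kwh": 30.5}])

print(framework.snapshot())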

The informed decision-making enterprise business platform is established through
building business intelligence on top of existing business systems and is typically
built in a phased approach, which begins with the implementation of analytics.
Analytics alone can offer increased value by providing insight into existing
data; however, the business value increases proportionally to the available
level of grid observability. The further down the wire toward the customer you
can monitor, the greater the value you can realize.

Conclusion

IP-enabled networks, SOA-migrated application portfolios and analytics for
business intelligence are each strong tools used by many energy and utility
companies globally. However, when these tools are configured in an integrated
approach, they can deliver substantially greater value. Clearly, informed decision
making has the potential to be transformational in these industries, delivering
extended benefit and competitive advantage to clients.

 

Addressing the Aging Utility Workforce Challenge: ACT NOW

For several years, industry writers have been warning of the "impending aging workforce crisis." Demographic data has been analyzed and presented, and numerous conferences have been held to discuss the issue. However, utilities can no longer view this as a theoretical or future risk —  it is already critically impacting operations.[1]

With over half of utility employees aged 45 or older,[2] utilities are already incurring significant costs directly attributable to insufficient preparation for the loss of key skills through knowledge attrition and workforce turnover. Take, for example, the following scenarios:

A snowstorm begins while a utility is restoring power after an outage. For this reason, tire chains are required for crews working the next shift. The shift manager notices that dispatch is moving slowly during the first hour. When the manager asks the service center why the delays are occurring, the response is: "Well, the person who usually does these preparations retired back in August. Now no one can find the chains for the trucks, so the next shift has to wait around."

A work supervisor at a nuclear generating station admits that it now takes him much longer to solve nonroutine problems that he encounters. He explains: "I used to have a network of people at other similar plants in which someone would have come across a similar problem and knew how to solve it. They’ve retired, and I don’t know and trust the new folks the way I did the old ones."

The utility industry must acknowledge that the awareness-building stage is over — the time to act on the problem is at hand. Key skill shortages are already emerging. For the balance of the decade, aging is predicted to drive major shortages of qualified workers.[3] Attrition of high-value/mission-critical employees is increasing, putting critical organizational know-how at risk.

Workforce Challenges

Utility knowledge is complex, and someone can’t be hired off the street to perform switching or to monitor transmission grid operations. Baby boomer retirements are having, and will likely continue to have, major operational and business continuity implications. Continued losses of technical knowledge and specialized skill sets are expected to occur in union/craft workers, nonunion professionals and management positions. This situation creates numerous serious challenges:

  •  There are not enough younger employees being recruited to replace the baby boomers who are approaching retirement;
  •  Continuous loss of experienced employees will affect productivity, responsiveness, competitiveness, regulatory compliance and morale;
  •  Many managerial and labor positions are highly specialized and require extensive and costly training; and
  •  Evaluation of initiatives for extended employment, mentoring programs or contracting retired employees to work part time requires advance planning and changes in personnel policies.

While it is natural to focus on the issue of losing experienced talent and the challenges associated with replacing that talent, utilities must keep reliability, safety and security linked to aging workforce planning. The North American Electric Reliability Council’s (NERC’s) Final Report on the August 14, 2003 blackout in the northeastern United States and Homeland Security initiatives make it clear that maintaining industry skill sets is critical. But doing so won’t be easy, and it will be costly.[4]

An example we are already seeing is the imbalance in sheer numbers between exiting workers and incoming prospects, which is already causing intense competition among utilities (not to mention between utilities and other industrial sectors) for workers. The annual Lineman’s Rodeo, for example, has become a virtual "scouting combine" for line workers, and participation in activities sponsored by industry organizations is seen by some as an opportunity to seek out high-performing specialists and aggressively recruit them.[5] While this means of acquiring talent has not erupted into open warfare among utilities yet, as the scarcity of trained, experienced talent grows over time, the danger of escalating labor costs will grow — and the inter-utility cooperation we have seen in recent years as so critical to recovering from natural disasters may be threatened.

Looking at data that the U.S. Department of Labor maintains regarding where utilities will face the greatest hiring needs over the next six years, we think that several layers of the utility workforce will be affected (see Figure 1). Therefore, utility executives must ask critical questions now, before it is too late to plan effectively. What level of experience will be available for the next emergency? Are transmission operators, engineers, line personnel, field supervisors or other key disciplines exiting the workforce faster than they can be replaced? What are the workforce and skill deficiencies in place today costing companies, and how will this grow over time?

Why This Is an Issue Today

The costs of an aging workforce that impact utilities today can be placed into three categories: operational costs, productivity costs and opportunity costs. Taken together, these costs can run into millions of dollars per year, and, unless effectively managed, can grow quickly. Utilities not aware of these impacts and not currently tracking these costs should begin as soon as possible to gauge workforce knowledge attrition issues in their organizations today. Understanding these impacts is an important step in targeting the areas to address with mitigation strategies.

Operational Costs
The examples above fall into the category of operational costs of an aging workforce. These costs are the ones that directly impact the bottom line through a wide variety of forces to which aging workforce impacts can contribute, including:

  •  Lost revenue due to extended outages;
  •  Penalties from regulatory agencies, higher maintenance costs; and
  •  Increased frequencies of forced outages and accidents caused by human error as highly experienced operators retire, since human error rates typically decrease significantly with experience.

Many of these forces, particularly those relating to operations and maintenance (O&M) costs in generation and transmission, can be tracked with data already being collected and reported to meet regulatory requirements (e.g., service reliability reports, FERC Form 1 or 2 data). Since new systems need not be created to collect this data, "smart metrics" that capture knowledge attrition impacts should be identified and trended to flag emerging problem areas.

Productivity Costs
Productivity costs are being seen over a wide variety of areas, including:

  •  Increases in the duration of planned outages are expected as the new hires gradually build the expertise and efficiency in their jobs that the current workforce has; and
  •  Work time lost in recovery from injuries is expected to be much greater for older workers; for workers aged 19 to 29 the average number of days lost is 10.4, compared with 47.5 for those aged 50 to 59.[6]

Between OSHA, state workers’ compensation systems and insurance companies, most utility companies are already required to maintain information on worker injury frequency and days lost from work. Capturing worker ages and attributing the costs incurred for each injury to the event will provide insightful information.

Opportunity Costs
Lost or delayed opportunities to take costs out of the business are the opportunity costs of the aging workforce, including:

  •  Performance programs end up costing more than budgeted and extend planned ROI time frames; and
  •  Internal resources may be limited, preventing a performance program from ever beginning.

It is difficult to create metrics for this and trend it, but it is possible to estimate the costs of ignoring such programs.

More Than a Human Resources Issue

These impacts on work processes and organization structures, and consequently on the financial health of the company, should make it clear that this is not the type of hiring problem with which human resources departments have traditionally dealt. A strong, coherent strategic response from the top of the organization will be required, along with serious discussion, commitment and action by a cross-functional group of workforce and union leadership, led by utility executives.

Some of the key strategic and financial issues to consider in these discussions include:

  •  The need to look at hiring as an investment, not a cost, and treat human asset management the same as physical asset management;
  •  The potential impact of workforce attrition on the financial health and (for publicly held companies) market capitalization of the organization;
  •  The need for workers, older and younger, with the mathematical, problem-solving, technical and communications skills to operate a world-class utility’s assets, particularly as next-generation technologies become available;
  •  The value proposition of knowledge management and its most effective applications to training, staff development and knowledge retention;
  •  The ever-escalating cost of workforce turnover, which some sources have pegged as high as double the salary of the employee in the position and which will only move higher as competition for workers increases; and
  •  The benefits and drawbacks of changes to retirement benefits, retirement policies and post-retirement employment programs, which can keep key knowledge circulating in the organization, or drive it out in droves.

Each utility has company-specific attrition data, organization demographics, training development, knowledge capture and union bargaining initiatives to address. Unlike in other industries faced with aging workforces, skill-set shortages at utilities are painfully obvious in the services they provide to the public. Major system failures like the 2003 Northeast blackout not only hurt public relations, but adversely affect financial health through loss of revenue, increased pressure from competitors and regulators, and penalties and fines.

Solutions

Effective solutions to address the impact of workforce retirements will not come easily. However, the problem can be addressed in a structured manner by considering what can be done in the short term (now), the midterm (six to 18 months) and the long term (18 months and beyond). If these time frames seem condensed, consider that a reflection of the accelerated action this first-of-a-kind workforce transition requires.

Short Term
To help ensure that aging workforce mitigation programs have a solid foundation, evaluation of the aging workforce impacts specific to each organization and knowledge retention efforts should begin now.

Quantifying the impact of retirements on institutional knowledge and understanding the actions that will be required in the future to mitigate those losses are critical first steps toward a smooth generational transition. Structured processes exist for initiating aging workforce planning, providing utility organizations with a set of templates, questionnaires and interviews to accomplish this important goal. Using these tools, a utility’s executives identify and document "core/critical" positions based on initial orientation sessions and follow-on sponsor discussions. Surveys for key workforce populations are prepared, and executive interviews are conducted to gauge executives’ views of the future state of the organization and how an aging workforce strategy might be tied to those views.

Next, workforce profiling is done with the surveys and future-state vision of the executive team as a foundation. The data are reviewed to identify trends and patterns and the team seeks to determine key aging workforce vulnerabilities across key job occupations. A risk assessment may be performed including both the timing of the attrition and the position’s criticality.
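
One simple way to structure such a risk assessment is to score each core/critical position on attrition timing and criticality and then combine the two. The Python sketch below uses invented positions, scales and weights purely for illustration; it is not a prescribed methodology.

# Hypothetical attrition risk scoring: sooner expected retirement and higher
# position criticality both raise the score. All inputs are illustrative.
positions = [  # (title, years until expected retirement, criticality on a 1-5 scale)
    ("Transmission operator", 2, 5),
    ("Line supervisor", 6, 4),
    ("Rate analyst", 10, 3),
]

def risk_score(years_to_attrition, criticality):
    timing = max(0, 10 - years_to_attrition)   # 0 (distant) .. 10 (imminent)
    return timing * criticality

for title, years, criticality in sorted(positions, key=lambda p: risk_score(p[1], p[2]), reverse=True):
    print(f"{title}: risk score {risk_score(years, criticality)}")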

From here, a utility can choose to take an "early intervention" step to quickly start mitigating current aging workforce problems while longer-term strategies are developed. A typical first step is some type of formal knowledge retention initiative. Typical elements of such a program include:

  •  Central knowledge repositories accessible to virtually all staff;
  •  Communities of practice addressing key business challenges;
  •  Mentoring and apprenticeship programs to build important technical and leadership skills;
  •  A directory of subject matter experts to whom staff can turn for specific problem-solving needs;
  •  Implementation of a content life cycle management system to electronically file, index and store data in key areas such as regulatory, asset management and operations;
  •  Creation and dissemination of a knowledge process map to direct solution seekers to applicable content and experts;
  •  Initiation of a lessons-learned capture process to document knowledge from retiring employees regarding critical issues (e.g., safety, regulatory, design and start-up); and
  •  Fostering organizational change management to build and sustain a culture of collaboration and knowledge sharing.

While initiatives in planning, knowledge retention and training can yield both short- and long-term benefits, Jim Hunter, the utility director for the largest union in the industry, the International Brotherhood of Electrical Workers (IBEW), pointed out in a January 2006 interview that the workforce doesn’t stop aging while these activities are conducted. Hunter knows the impact of the problem firsthand as he has seen his organization’s membership decline by 30 to 35 percent over the past decade due primarily to attrition.[7]

Increasingly, utilities are realizing these "quick win" measures are only part of the solution; more comprehensive midterm and long-term steps are necessary to truly gain the upper hand in the lengthy struggle to come.

Midterm
While the baby boomers prepare to move on to their post-work life, the generation Xers of today are on the cusp of taking over the reins of the company’s key managerial and executive positions. What wages, benefits and work environment will maintain an effective skills mix through this transition? How well-prepared is the company’s next generation of leaders? These types of questions lead to retention, leadership and work design/planning as the next focal areas beyond the critical short-term needs.[8]

Employee retention and leadership development programs will perhaps be the most critical pieces of the aging workforce mitigation strategy for one simple reason: skill sets that aren’t lost don’t have to be replaced. These strategies must be built on three pillars:

  •  Retention of current early- and mid-career workers, whose knowledge is obviously very valuable to their employer but is also valuable to a host of other potential suitors from inside and outside the industry;
  •  Leadership programs that identify high-potential individuals suited for key roles, provide targeted mentoring and help ensure that the compensation and work-flexibility packages offered to these future leaders meet the new expectations of their generation;[8] and
  •  Creation of incentives and programs (e.g., part-time or advisory positions) for retiring workers to stay on a few years longer to help manage this generational transition.

The other midterm component, organizational and work planning, facilitates more efficient staffing for the work that needs to be done to help ensure safe, reliable and profitable operation in the future. Evaluation areas include:

  •  Spans of control for managers (most likely by benchmarking versus other companies as there is no one mathematical formula for "calculating a proper span of control");
  •  Supply chain and other logistics which can create delays in starting or executing work; and
  •  Work planning, route scheduling and creative work arrangements (e.g., work-from-home technicians).

Most utilities would admit there is still room for improvement in these areas, but this should be seen as an opportunity to apply a little creativity and flexibility to potentially create tremendous benefits.

Long Term
Longer-term solutions will be the toughest, particularly for investor-owned utilities whose shareholders want to see ever-decreasing quarter-over-quarter costs. These solutions will take investment; however, executives who have the vision and salesmanship to explain the long-run benefits to shareholders will be viewed in retrospect as superior stewards of the company through what may be its most challenging era. While not the only long-term investments needed to survive the generational transition, those made in technology and educational partnerships are likely to be the most significant.

In addition to providing new tools to keep skilled, older workers in the mix and retain critical knowledge, technology is likely to play an increasingly vital role in attempts to reduce the costs of running a safe, reliable and profitable utility company. A good example of this is the concept of the intelligent network.[9] Among the many potential benefits foreseen from the next-generation electric grid are:

  •  Data gathered from the network are used to guide more effective asset management;
  •  Investment is focused on components and systems approaching their optimum sustainable capacities or the end of their actual lives;
  •  Real-time reconfiguration of the network allows components to operate within their safe capacity limits to enhance long-term reliability and infrastructure life; and
  •  Real-time information provides detailed information on faults, keeping outages as short as possible.

These capabilities could reduce both system life cycle costs and the number of employees required to keep the transmission and distribution infrastructure in top operating condition. Smarter asset management can mean fewer hours spent on emergency repairs (and on the restorations related to the service failures behind them). Fewer and shorter outages can help reduce labor hours expended on emergency restoration. Regulatory penalties, which will likely increase over time, may also be reduced.

For decades, the utility industry had been seen as staid and far from the cutting edge. Whether that was true is debatable, but the necessary responses to the aging infrastructure and the volatile energy markets make that decidedly not the case in the 21st century. So while utilities reap financial and operational benefits from application of new advances in the state of the art, they should not miss the opportunity to use them as a powerful and timely recruiting tool as well.

To address educational gaps, the industry is stepping up its influence on the nation’s classrooms to help ensure a population of motivated and well-educated young men and women who see great promise in a career in working for utilities. This is in response to the alarming realization that today’s high school graduates are increasingly short on mathematical, communications and mechanical skills necessary to effectively perform the types of operational and maintenance work that utilities require. The most high-profile effort to achieve this goal is the Utility Business Education Coalition (UBEC), a national alliance of leading electric and natural gas utility companies committed to elevating the visibility of workforce development as a strategic business imperative. However, a number of utilities, such as Progress Energy, Cinergy, PSE&G and AEP, have launched successful partnerships on their own at levels ranging from K-12 through graduate programs to help foster these skills and build awareness of career opportunities with utilities.

Conclusion

Utilities have a huge investment in employee experience. Managing the transfer of these skill sets as part of a succession strategy, concurrent with managing operations, may require innovative investments in human assets. Utilities that successfully manage these changes will set and employ a strategy that incorporates short-term, midterm and long-term approaches. Utilities need to start today to quantify the magnitude of the problem and undertake knowledge-retention efforts, quickly followed by work redesign, staff retention and leadership planning. In the longer term, utilities should look to technology investment and deployment and to educational partnerships to sustain an effective workforce and a profitable, safe and reliable future for the company.

Endnotes

  1. For articles related to the aging workforce, see "Ergonomic Challenge: The Aging Work Force" by Stephen G. Minter in the September 2002 issue of Occupational Hazards; "The Aging US Workforce and Utilities Industries" by UTC Research, United Telecom Council, March 2004; and "Brain Drain: Our Graying Utilities" by Arthur O’Donnell in the November/December 2004 issue of EnergyBiz.
  2. Ray, Dennis and Bill Snyder. "Strategies to Address the Problems of Exiting Expertise in the Electric Power Industry." IEEE International Conference on System Sciences, 2005.
  3. O’Donnell, Arthur. "Brain Drain: Our Graying Utilities." EnergyBiz, November/December 2004.
  4. "Technical Analysis of the August 14, 2003 Blackout: What Happened, Why, and What Did We Learn?" The North American Electric Reliability Council (NERC) Steering Group. NERC. July 13, 2004.
  5. O’Donnell, Arthur. "Brain Drain: Our Graying Utilities." EnergyBiz, November/December 2004.
  6. Minter, Stephen G. "Ergonomic Challenge: The Aging Work Force." Occupational Hazards, September 2002.
  7. The IBM authors of this article, Patty Bruffy and John Juliano, interviewed Jim Hunter, Utility Director, International Brotherhood of Electrical Workers (IBEW), on January 6, 2006.
  8. For a perspective on the significant differences in the expectations of generation Xers versus those of baby boomers, see "Teaching Them the Business" by John Juliano in the June 2004 issue of Public Utilities Fortnightly.
  9. For more information, see "The Intelligent Power Grid" by Jeffrey Taft, on page 74 of this publication.