How well does a utility perform its distribution operations? The answer to
that question depends in part on what you are trying to achieve. Most utility
stakeholders (customers, regulatory bodies, local governments, staff and owners)
formulate an answer based on some combination of reliability, cost, power quality
and safety.

Let’s review how these factors can be evaluated as they relate to the performance
measuring, monitoring and reporting challenges facing today’s electric distribution
company.

From Data to Decisions

Critical to the evaluation of the success or failure of any business process
is accurate knowledge of how the process performed in the past, how it is performing
now and how it is likely to perform in the future. That knowledge can arise
only from data.

But amassing this data, transforming it into knowledge and communicating it
to stakeholders can be a daunting task. This raw data must be captured and transformed
into meaningful information that the stakeholders can understand. The stakeholders
must then process the information in the context of their existing knowledge,
building a broader knowledge base from which to make decisions about tactical
corrections and strategic modifications.

Once this knowledge has been cultivated, cause-and-effect relationships can
be investigated to determine the drivers behind absolute performance and performance
trends. The electric distribution company is no exception to this proven method
of achieving organizational excellence.

The Complexity of Data Assembly and Analysis

Measuring
In the area of electric distribution operations, key performance indicators
(KPIs) have traditionally taken the form of engineering calculations that provide
an indication of how well the delivery system itself (“the wires”) succeeds
at “keeping the lights on” and how well operations responds when the inevitable
failure does occur (restoration). These measures are commonly referred to as
reliability indices and are intended to measure performance from the electric
customer’s perspective (i.e., customer-based measures).

Distribution operations typically “perform” in one of two modes of operation
– normal or abnormal. Abnormal operation stems from the fact that electric distribution
systems operate in harsh environments that are beyond the control of the utility.
These systems are inherently exposed to nature’s elements, including animals,
vegetation and weather, as well as geological and human incidents. Since it
is cost prohibitive to build a system that operates flawlessly in this environment,
each utility has adopted design, construction and maintenance standards based
on an acceptable trade-off between cost and performance as determined by their
regulators and/or customers served.

Incorporated in the development of these standards are limits related to the
harshness of the environment that loosely define “normal” operating conditions.
The “lights are expected to stay on” while operating within these limits. The
environmental harshness will occasionally exceed these limits (an occurrence
termed a “major event”), changing the operating mode to abnormal and altering
performance expectations.

Standard Practices
Consistent measurement practices with defined methodologies and terminology
are critical to meaningful tracking and analysis of electric distribution performance.
Such practices are not only required to support internal distribution company
decision making; analysts, investors, regulators, owners and large commercial/industrial
electric customers also regularly compare performance between companies in support
of their own decision making.

The Institute of Electrical and Electronics Engineers (IEEE) is the commonly
accepted authority on standardization in this area and has published the IEEE
Guide for Electric Power Distribution Reliability Indices, IEEE Std. 1366™.
Industry surveys conducted by the IEEE/PES Distribution Subcommittee Working
Group on System Design have identified the most commonly used customer-based
indices (see Figure 1), listed in order of frequency of use. These calculations
transform raw interruption data typically logged during the day-to-day operation
of the electric distribution system into meaningful information.
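To make the arithmetic concrete, the sketch below shows how the most widely
used of these indices (SAIFI, SAIDI and CAIDI) can be computed from a set of
sustained interruption records. The record layout is a hypothetical minimum,
not a prescribed schema; the index formulas themselves follow IEEE Std. 1366.

    from dataclasses import dataclass

    @dataclass
    class Interruption:
        """One sustained interruption event (hypothetical minimal record)."""
        customers_affected: int    # customers interrupted by the event
        duration_minutes: float    # time to restore those customers

    def reliability_indices(events, customers_served):
        """Compute the common IEEE 1366 customer-based indices.

        SAIFI = total customer interruptions / total customers served
        SAIDI = total customer-minutes interrupted / total customers served
        CAIDI = SAIDI / SAIFI (average restoration time per interrupted customer)
        """
        total_interruptions = sum(e.customers_affected for e in events)
        customer_minutes = sum(e.customers_affected * e.duration_minutes
                               for e in events)
        saifi = total_interruptions / customers_served
        saidi = customer_minutes / customers_served
        caidi = saidi / saifi if saifi else 0.0
        return {"SAIFI": saifi, "SAIDI": saidi, "CAIDI": caidi}

    # Example: two outages on a system serving 10,000 customers
    events = [Interruption(1200, 90.0), Interruption(300, 45.0)]
    print(reliability_indices(events, customers_served=10_000))
    # {'SAIFI': 0.15, 'SAIDI': 12.15, 'CAIDI': 81.0}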

Transformation of this information into a knowledge base requires further augmentation
with operational data related to the cause of each interruption, operating mode
at the time of each interruption, type of isolating devices involved in each
interruption, and type of any electric system component that may have failed.
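As an illustration only, an augmented interruption record might carry fields
along the following lines; the field names and code values here are assumptions,
not IEEE-defined categories.

    from dataclasses import dataclass
    from enum import Enum
    from typing import Optional

    class OperatingMode(Enum):
        NORMAL = "normal"
        ABNORMAL = "abnormal"   # e.g., during a declared major event

    @dataclass
    class AugmentedInterruption:
        """Interruption record plus the operational context described above.
        Field names and example values are illustrative, not standardized."""
        customers_affected: int
        duration_minutes: float
        cause: str                       # e.g., "vegetation", "animal", "lightning"
        operating_mode: OperatingMode    # system mode at the time of interruption
        isolating_device: str            # e.g., "fuse", "recloser", "breaker"
        failed_component: Optional[str] = None   # None if no component failed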

A recently submitted white paper by the IEEE/PES Distribution Subcommittee
Working Group on System Design, titled, “Collecting and Categorizing Information
Related to Electric Power Distribution Interruption Events: Data Consistency
and Categorization for Benchmarking Surveys,” further defines a minimum set
of data collection categories required for benchmarking (see Figure 2).

The resulting body of knowledge provides powerful information on which to base
operational, engineering and financial decisions across the entire electric
distribution enterprise.

Monitoring and Reporting
Simply measuring performance and building a base of knowledge does nothing,
in and of itself, to affect performance. Performance must be monitored over
time for the measures selected to be useful in steering processes in a direction
that will result in organizational success. Winning decisions require knowledge
of “where we are” as well as “where we are heading.”

Performance monitoring and reporting methods vary depending upon the business
needs of the particular function or process that is to benefit from the available
knowledge. Required periodicities of access and information granularity are
important factors in determining the best methods for the accumulation, structure
and dissemination of knowledge. Static annual reports of KPIs may suffice for
some planning functions but operations management functions typically demand
dynamic access to current information, not only to the KPIs but also to the
raw data that surrounds and impacts the KPIs.

A diverse range of performance monitoring and reporting needs exist within
the electric distribution enterprise:

  • Predefined reports of aggregate information about performance over relatively
    long periods of time (quarter-to-quarter, year-to-year) typically meet the
    needs of performance-based regulation and regulatory compliance reporting;
  • System planning and design functions need similar reports of like information
    but also benefit from more interactive reporting methods;
  • The ability to view information from various system perspectives is necessary
    to support reliability planning efforts;
  • Maintenance functions typically require more granular information about
    performance over somewhat shorter periods of time (week to week, month to
    month) and need even more interaction with the raw data;
  • The ability to view information from various equipment perspectives is necessary
    to support reliability-centered maintenance efforts; and
  • Operation planning functions require very granular performance information
    over even shorter periods of time (hour to hour, day to day) and need maximum
    flexibility in reporting.

All of these functions can benefit from, if not require, the ability to filter
specific abnormal occurrences or accepted normal occurrences from the KPIs and
other information reported. For example, comparison of the performance of a
particular system design against related standards could be skewed if data from
events occurring outside of the predefined limits within the standard (e.g.,
major events or scheduled interruptions) are not excluded. This could result
in unnecessary and costly upgrading and overbuilding of the distribution infrastructure.
Obviously, the definitions of excluded occurrences are critical to accurately
evaluating and comparing KPIs and related information.
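A minimal filtering sketch follows, assuming the major-event and scheduled
flags have already been set upstream (for instance, by applying the utility's
adopted major event definition, such as the IEEE 1366 major event day test);
the record fields are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Event:
        """Minimal record for exclusion filtering (hypothetical fields)."""
        customers_affected: int
        duration_minutes: float
        major_event: bool = False   # flagged per the adopted major event definition
        scheduled: bool = False     # planned/scheduled interruption

    def reportable(events):
        """Drop occurrences excluded from normal-mode KPI reporting."""
        return [e for e in events if not (e.major_event or e.scheduled)]

    events = [Event(1200, 90.0),
              Event(50_000, 600.0, major_event=True),
              Event(80, 120.0, scheduled=True)]
    print(len(reportable(events)))   # -> 1: only the normal, unplanned outage remains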

When a major event occurs on the electric system, resulting in customer interruptions,
the required periodicity of access to information shortens to less than hourly,
the required granularity of information increases significantly and the number
of interested parties grows by orders of magnitude.

  • Operations management and support personnel need current information, including
    information down to the individual electric customer level, on which to base
    the minute-to-minute decisions necessary to assist operators/dispatchers in
    quickly and safely restoring electric service;
  • Corporate communication personnel need similar information for dissemination
    to the media and response to direct public contact;
  • Planning, design, maintenance and other personnel may be called upon to
    lend assistance in developing restoration plans and communicating with external
    entities; and
  • Depending on severity of the event, regulatory and emergency management
    organizations may require access to current information.

Technology Challenges

Measuring, monitoring and reporting performance presents several technology
challenges that cannot be adequately addressed by traditional operational IS
approaches. Although additional users could be added to the operational system
to gain access to the operational data, each additional hit to the operational
database degrades performance, however minimally, of the application programs
that depend on the operational database. Users interested in the data, as opposed
to the application program results, will typically generate far more hits per
unit of time than the application programs for which the database was designed.
Furthermore, these additional hits tend to coincide with the periods of maximum
usage of the application programs (normal working hours).

Setting aside the IS performance, direct cost and physical implications, opening
access to the operational data on a broad scale through the operational system
presents additional security risks that could directly and negatively impact
the operational system itself. Preventing breaches would add significantly to
the challenges facing system administrators.

Consideration must be given to the functionality of the data analysis and reporting
tools available. Custom development and maintenance of these tools is very costly.
Without creative solutions, these challenges lead to severe limitations on,
and ultimately abandonment of, access to operational data – the exact opposite
of what the organization needs.

The Data Mart

Helpful in understanding the data mart solution is a comparison of the operational
system data environment (see Figure 3) and the business management environment
(see Figure 4) to that of the data mart.

Fundamental to the proper design of a data mart solution is a thorough understanding
of the requirement issues dictated by the specific implementation under consideration.

Requirements typically fall into one of three categories:

  • The business function and scope requirements definition asks questions about
    which specific business problems in what part of the enterprise are to be
    addressed;
  • The data requirements definition asks questions about characteristics of
    source data and the needs for its extraction, refinement and re-engineering;
    and
  • The access and usage requirements definition asks questions about who will
    use the solution, when will they use it, what will they use and how will they
    use it.

Business Function and Scope Requirements
The various functional areas that can benefit from a distribution operations
data mart have varying needs. These needs affect data mart design with respect
to dimensional analysis, granularity of information and temporal analysis.

Dimensional analysis (see Figure 5) involves determining how best to examine,
or “slice and dice,” the information captured to address the business problems
at hand in a particular area of the enterprise.

Dimensions relate to things such as time, space, frequency, etc. Figure 6 illustrates
some possible examples of time and space dimensions in the realm of electric
distribution operations. Other possible dimensions are: number of interruptions,
number of customers out, etc.

Multidimensional analysis (see Figure 7) entails “slicing and dicing” by a
combination of multiple dimensions. Some examples are: outage duration and reliability
index by outage cause; daily trouble by branch by region by company; reliability
indices by year by feeder by substation by branch; momentary events by feeder
by substation by branch, etc.
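As a brief illustration of such slicing and dicing, assuming interruption facts
are available in tabular form (the column names here are hypothetical), a
general-purpose analysis library can produce several of the views listed above:

    import pandas as pd

    # Hypothetical interruption facts with a few analysis dimensions
    df = pd.DataFrame({
        "date":      pd.to_datetime(["2004-06-01", "2004-06-01",
                                     "2004-06-02", "2004-06-02"]),
        "region":    ["North", "South", "North", "North"],
        "cause":     ["vegetation", "animal", "vegetation", "lightning"],
        "duration":  [90.0, 45.0, 120.0, 30.0],   # minutes
        "customers": [1200, 300, 800, 150],
    })

    # Slice: total outage duration by cause
    print(df.groupby("cause")["duration"].sum())

    # Dice: daily trouble (event count) by region by day
    print(df.groupby(["region", df["date"].dt.date]).size())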

Granularity analysis (see Figure 8) refers to determining how much detail is
required in
the data to meet the business requirements of the data mart. Different business
functions typically require different levels of granularity with varying levels
of summarization and/or aggregation. The importance of each of these requirements
must be balanced against the cost of the IS necessary to provide them.

The granularity of the time unit against which data is required to be captured
and analyzed must be scrutinized (e.g., this year, last year, this quarter,
last quarter, today, etc.). Temporal distortions can occur due to the differences
in the rate of change of the various dimensions. For example, changes to normal
circuit topology and customer information typically occur at a slower rate than
changes in abnormal device states.

The data of interest from electric distribution operations ranges from the
transactional level (customer call) to the aggregate level (outages by district)
and from the historical (last year) to the current (now), each having different
impacts on the data mart design.
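One common way to accommodate this range is a dimensional (star schema) layout:
a fact table of interruption events keyed to separate time, location and cause
dimension tables. The sketch below is illustrative only; the table and column
names are assumptions, not a prescribed design.

    import sqlite3

    # Fact table of interruption events keyed to time, location and cause
    # dimensions; all names are illustrative.
    ddl = """
    CREATE TABLE dim_time     (time_id INTEGER PRIMARY KEY,
                               day TEXT, month TEXT, quarter TEXT, year INTEGER);
    CREATE TABLE dim_location (loc_id INTEGER PRIMARY KEY,
                               feeder TEXT, substation TEXT,
                               branch TEXT, region TEXT);
    CREATE TABLE dim_cause    (cause_id INTEGER PRIMARY KEY,
                               cause TEXT, category TEXT);
    CREATE TABLE fact_interruption (
        time_id  INTEGER REFERENCES dim_time(time_id),
        loc_id   INTEGER REFERENCES dim_location(loc_id),
        cause_id INTEGER REFERENCES dim_cause(cause_id),
        customers_affected INTEGER,
        customer_minutes   REAL);
    """

    conn = sqlite3.connect(":memory:")
    conn.executescript(ddl)
    # Reports then reduce to joins and aggregations against the fact table,
    # e.g., customer-minutes by region by quarter.

In such a layout, transactional detail (the individual customer call or outage
event) lives in the fact table at the finest grain, while coarser summaries
(outages by district, by quarter) are derived by aggregation rather than stored
redundantly.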

Access and Usage Requirements
Different users require varying levels of access to, and views of, the information
contained within the data mart. Management is typically interested in easy retrieval
of predefined information summaries at multiple levels to support decision making.
Other business users are typically interested in the added ability to “massage”
the information and vary the views, as well as access to detailed, archived
data.

A powerful tool for the business user that is enabled by a properly designed
data mart is Online Analytical Processing (OLAP). This tool provides for multidimensional
analysis using drill-down and roll-up techniques as well as iterative analysis
of data by changing the order of dimensions.
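The following sketch illustrates the roll-up and drill-down idea using the
same kind of hypothetical tabular data; a commercial OLAP tool would provide
the equivalent interactively.

    import pandas as pd

    df = pd.DataFrame({
        "year":       [2003, 2003, 2004, 2004],
        "quarter":    ["Q1", "Q2", "Q1", "Q1"],
        "substation": ["SubA", "SubA", "SubB", "SubA"],
        "feeder":     ["F101", "F102", "F201", "F101"],
        "customer_minutes": [108_000.0, 13_500.0, 96_000.0, 24_000.0],
    })

    # Drill-down: the most detailed view, by year, quarter, substation, feeder
    detail = df.groupby(["year", "quarter", "substation", "feeder"]).sum()

    # Roll-up: collapse the lower levels to summarize by year alone
    summary = df.groupby("year")["customer_minutes"].sum()
    print(detail, summary, sep="\n\n")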

Summary

A comprehensive analysis of the combined set of issues leads to the conclusion
that the application of data warehousing concepts is fundamental to the development
of a robust solution. These concepts include warehousing data offline from the
operational database, organizing data for efficient data analysis and reporting,
opening access to operational data on a broad scale at minimal cost and isolating
business users from the operational system. In addition, the popularity of data
warehouse solutions has spawned the development of powerful, readily available,
cost-effective data analysis and reporting tools.