Intelligent utility networks (IUNs), also
known in the electric power industry
as intelligent grids, smart grids or
modern grids, make use of large numbers
of sensing points and intelligent devices
to greatly increase the observability of
the grid state, device states and quality
of delivered service. Utilities are learning
to use this massive flood of new data
to make significant improvements in the
three primary functions of the utility:
delivery of reliable, high-quality power;
support for sophisticated customer
services; and advanced work and asset
management. The amount of data that
an intelligent utility network may produce
can only be handled by automated
analytics, since there is far too much
data streaming in at high data rates for
humans to comprehend and act upon
directly.

We define analytics as software tools
that transform data into information
that can be acted upon, in the form of
automatic controls, decision support or
performance indicators that influence
operations or planning. In the past, utilities
have often used stand-alone analytics
systems that had little or no ability to
integrate with business systems and other
utility applications and had limited ability
to expand or scale. However, with the use
of a solid architectural framework and
modern technology support, utilities can
implement flexible and scalable analytics
systems that enable them to realize the
full value of their investments in intelligent
grid infrastructure.

To appreciate the need for these technologies,
we review some key aspects of
intelligent utility networks, starting with
the nature of the utility assets themselves.
We then look at the infrastructure that
transforms a traditional utility network
into an intelligent utility network,
and finally examine technologies that
support the implementation and operation
of advanced analytics for the intelligent
utility network.

Utility Asset Characteristics and Intelligent Utility Network Structure

Utility assets have several important distinguishing
characteristics that impact
the nature of analytics technology:

  • They must operate continuously
    (24/7/365);
  • They are geographically distributed;
  • They have a definite hierarchical structure;
    and
  • It takes a great many sensing points
    and analytics to make these assets fully
    observable; a few key performance indicators
    (KPIs) are not sufficient.

Figure 1: Analytics Hierarchy for a Transmission and Distribution Utility

Consider an electric transmission and
distribution utility as an example. At the
logical top of the hierarchy we have the
business operations. Below that, we have
the control centers for transmission and
distribution. Below each of these we have
substations and the equipment contained
therein. On the distribution side, the hierarchy
continues downward to the feeder
circuits and associated devices, and to the
customer meters. If we consider the analytics
necessary to characterize fully this
set of assets and associated operations,
we see a matching hierarchy, as shown in
Figure 1. Arrows indicate the flow of analytics
results.

What is not as clear from the diagram,
but is still eminently true, is that operational
time scales for analytics shorten
as we move down the hierarchy. At the
feeder circuit level, we may require analytics
to operate in milliseconds, whereas at
the enterprise level, we may only require
analytics to operate on a weekly, monthly
or quarterly basis. One exception here
is that billing-related meter functions
do not need to operate on millisecond
time scales. However, where the meters
are also used as grid sensors (for, say,
outage detection/localization or grid-state
monitoring), the more rapid
time scales do apply to those analytics.
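
To make this hierarchy concrete, here is a minimal sketch in Python; the level names and nominal cadences are illustrative assumptions, not prescriptions for any particular utility.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AnalyticsLevel:
    """One level of the utility analytics hierarchy."""
    name: str
    nominal_cadence: str                 # typical operational time scale
    children: List["AnalyticsLevel"] = field(default_factory=list)

# Illustrative hierarchy; time scales shorten toward the bottom,
# except for billing-related meter functions.
hierarchy = AnalyticsLevel("enterprise / business operations", "weekly to quarterly", [
    AnalyticsLevel("transmission control center", "seconds to minutes"),
    AnalyticsLevel("distribution control center", "seconds to minutes", [
        AnalyticsLevel("substation", "milliseconds to seconds", [
            AnalyticsLevel("feeder circuit", "milliseconds", [
                AnalyticsLevel("customer meter (billing)", "hourly to daily"),
                AnalyticsLevel("customer meter (grid sensor)", "seconds"),
            ]),
        ]),
    ]),
])

def print_levels(level: AnalyticsLevel, depth: int = 0) -> None:
    """Walk the hierarchy top-down, printing each level and its cadence."""
    print("  " * depth + f"{level.name}: {level.nominal_cadence}")
    for child in level.children:
        print_levels(child, depth + 1)

print_levels(hierarchy)
```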

The implication of such a logical and
temporal hierarchy, with geospatial distribution
of assets, is that we must use technologies
to support analytics that provide
for distributed sensing, processing and
communications, as well as geospatial,
temporal and topological (grid connectivity)
awareness.

The distribution issue is especially
important. There are certain analytics
that can only be implemented in a
centralized fashion, since they are inherently
global in nature, such as system
performance metrics. Others, however,
are essentially local in nature and can be
computed right at the sensing points with
smart sensors or smart RTUs (remote
terminal units) and then reported out to
applications and repositories as needed.
Examples of local analytics include RMS
voltage, total harmonic distortion (THD), and real and reactive power
flow. In some cases, analytics are better
implemented in a partitioned fashion, with
some elements being computed locally,
and some elements being computed in a
centralized server. This is especially true
for analytics that assemble a global view
of system performance from a number
of localized but complex measurements,
such as for high-impedance fault location
via distributed sensors.
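
As a sketch of such local analytics, the following computes RMS voltage, THD and real and reactive power from one window of sampled waveforms, as a smart sensor or RTU might; the sampling rate and signal values are illustrative assumptions.

```python
import numpy as np

def local_analytics(v: np.ndarray, i: np.ndarray, fs: float, f0: float = 60.0):
    """Compute local grid-state analytics from one window of waveform samples.

    v, i : voltage and current samples (same length); fs : sample rate in Hz;
    f0   : fundamental frequency in Hz.
    """
    v_rms = np.sqrt(np.mean(v ** 2))
    i_rms = np.sqrt(np.mean(i ** 2))
    p_real = np.mean(v * i)                    # real power (W)
    s_apparent = v_rms * i_rms                 # apparent power (VA)
    q_reactive = np.sqrt(max(s_apparent ** 2 - p_real ** 2, 0.0))  # reactive (var)

    # THD of the voltage: harmonic magnitudes relative to the fundamental.
    spectrum = np.abs(np.fft.rfft(v))
    bin_f0 = int(round(f0 * len(v) / fs))      # FFT bin of the fundamental
    fundamental = spectrum[bin_f0]
    harmonics = spectrum[2 * bin_f0::bin_f0]   # 2nd, 3rd, ... harmonic bins
    thd = np.sqrt(np.sum(harmonics ** 2)) / fundamental

    return {"Vrms": v_rms, "P": p_real, "Q": q_reactive, "THD": thd}

# Example: 60 Hz waveform sampled at 7.68 kHz (128 samples per cycle),
# with some third-harmonic distortion and a lagging current.
fs, f0, n = 7680.0, 60.0, 128
t = np.arange(n) / fs
v = 170.0 * np.sin(2 * np.pi * f0 * t) + 5.0 * np.sin(2 * np.pi * 3 * f0 * t)
i = 10.0 * np.sin(2 * np.pi * f0 * t - 0.3)
print(local_analytics(v, i, fs, f0))
```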

Below we review a number of advanced
technologies supporting the implementation
of analytics solutions for IUNs.

IUN Component Technologies

Data Sources
A wide variety of sensing devices is available,
and these devices increasingly include embedded
intelligence. From sensors connected
to microprocessor relays in substations
(also known as intelligent electronic
devices or IEDs) to smart grid devices
such as intelligent reclosers and capacitor
bank controllers to line monitors with or
without smart RTUs, there are many ways
to obtain measurements on service delivery
and on device status and health. For
electric distribution grids, these devices
generally provide data on grid state (voltage,
current, real and reactive power, etc.),
device state and device stress history,
power quality and power reliability, faults
and failures, and safety conditions. Key
technologies here are device-monitoring
tools, software tools that support remote
programming and application download
for flexible distributed intelligence, digital
communications interfaces and IEEE
1451-based transducer electronic data
sheet (TEDS) services.
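
As an illustration of the kind of record such a device might report, consider the following sketch; the field names and values are hypothetical, and a real device would describe its channels through standard metadata such as IEEE 1451 TEDS.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class DeviceReading:
    """One time-stamped reading from an intelligent grid device.

    Field names and units are illustrative, not a standard message format."""
    device_id: str
    timestamp: datetime          # GPS-disciplined time stamps enable correlation
    voltage_rms: float           # V
    current_rms: float           # A
    real_power: float            # W
    reactive_power: float        # var
    device_status: str           # e.g. "closed", "open", "lockout"

reading = DeviceReading(
    device_id="recloser-1742",   # hypothetical device
    timestamp=datetime.now(timezone.utc),
    voltage_rms=7200.0, current_rms=85.0,
    real_power=5.9e5, reactive_power=1.2e5,
    device_status="closed",
)
print(reading)
```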

Data Transport
There are over two dozen communications
technologies that can be and are used
by utilities, and it is not unusual to see a
utility use six or more simultaneously. The
key issue here is that utility data communications
networks that support advanced
utility analytics must be TCP/IP-enabled.
This provides the necessary flexibility
and interoperability to support sensor
data transport, network management,
data security services support and smart
device management.
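
The following is a minimal sketch of TCP/IP sensor-data transport, sending one newline-delimited JSON reading over a socket; the host, port and message layout are assumptions, and a production system would add a standard protocol, authentication and encryption.

```python
import json
import socket
from datetime import datetime, timezone

def send_reading(host: str, port: int, reading: dict) -> None:
    """Send one JSON-encoded, newline-delimited reading over TCP/IP."""
    line = json.dumps(reading).encode("utf-8") + b"\n"
    with socket.create_connection((host, port), timeout=5.0) as sock:
        sock.sendall(line)

reading = {
    "device_id": "recloser-1742",                       # hypothetical device
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "voltage_rms": 7200.0,
}
# send_reading("historian.example.net", 4040, reading)  # hypothetical endpoint
```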

Data Storage for Analytics
Utility analytics are by nature
sensor-data-driven, and ordinary relational
databases handle such high-rate data streams
poorly. For utility analytics
data, there are three primary data storage
technologies: data historians, meter
databases and CIM-structured data
warehouses. Data historians use special
formats to store sensor data and are
capable of keeping years’ worth of such
data (up to multiple terabytes) online and
rapidly accessible. Data historians may
be either centralized or distributed. CIM-structured
databases use the utility Common
Information Model as the basis for a
data warehouse that contains data from
a variety of utility sources and provides a
store against which analytics may be run
without loading down other utility databases
or applications. CIM also provides
an open-standard data model schema for
utilities, which avoids proprietary database
formats and goes far toward guaranteeing
interoperability with newer utility
control systems.
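
The append-and-range-query pattern that distinguishes a data historian from a general-purpose relational database can be sketched as follows; this toy in-memory version assumes in-order arrival and omits the compression and archiving a real historian provides.

```python
import bisect
from collections import defaultdict
from typing import Dict, List, Tuple

class MiniHistorian:
    """Toy in-memory data historian: append-only, time-ordered per tag."""

    def __init__(self) -> None:
        self._series: Dict[str, List[Tuple[float, float]]] = defaultdict(list)

    def append(self, tag: str, timestamp: float, value: float) -> None:
        # Assumes samples arrive in time order, as sensor streams usually do.
        self._series[tag].append((timestamp, value))

    def query(self, tag: str, t_start: float, t_end: float) -> List[Tuple[float, float]]:
        """Return all (timestamp, value) pairs for `tag` within [t_start, t_end]."""
        series = self._series[tag]
        lo = bisect.bisect_left(series, (t_start, float("-inf")))
        hi = bisect.bisect_right(series, (t_end, float("inf")))
        return series[lo:hi]

hist = MiniHistorian()
for t in range(10):
    hist.append("feeder12/voltage_rms", float(t), 7200.0 + t)  # hypothetical tag
print(hist.query("feeder12/voltage_rms", 3.0, 6.0))
```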

Meter databases have typically been
siloed in the past and have been managed
by meter data management systems.
However, both the meter databases and
data historians can be federated to CIM
data warehouses via middleware tools
made specifically for such database
integration. In this way, analytics can be
built to access only the data warehouse,
with data being automatically fetched
from the historian or meter database as
needed without the need to copy large
volumes of data from either of these
specialized databases into a relational
database (something that would overload
the relational database system easily).
In addition, some meter data collection
systems support multiple event subscribers,
thus allowing a meter data management
system to get usage and event data,
while also allowing other systems, such
as outage intelligence systems, to have
simultaneous near-real-time access to
event messages.
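
A minimal sketch of that federation pattern: analytics address only the warehouse facade, which delegates each query to the specialized store that actually holds the data. The class and method names here are hypothetical.

```python
from typing import Dict, List, Tuple

class StubSource:
    """Stand-in for a specialized store (data historian or meter database)."""
    def __init__(self, data: Dict[str, List[Tuple[float, float]]]) -> None:
        self._data = data
    def query(self, tag: str, t_start: float, t_end: float):
        return [(t, v) for (t, v) in self._data.get(tag, [])
                if t_start <= t <= t_end]

class FederatedWarehouse:
    """Warehouse facade: delegates each query to the store that holds the
    data, instead of copying bulk sensor data into a relational database."""
    def __init__(self, **sources) -> None:
        self._sources = sources
    def fetch(self, source: str, tag: str, t_start: float, t_end: float):
        return self._sources[source].query(tag, t_start, t_end)

warehouse = FederatedWarehouse(
    historian=StubSource({"feeder12/voltage_rms": [(1.0, 7201.0), (2.0, 7199.0)]}),
    meters=StubSource({"meter-883/kwh": [(1.0, 12.4)]}),
)
print(warehouse.fetch("historian", "feeder12/voltage_rms", 1.0, 2.0))
```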

Integration Buses
In the past, many utility analytics were
created to operate in stand-alone
fashion, and any integration among them
has been “swivel-chair integration,” where
the user manually transfers data or commands
from one screen to another. The
enterprise software integration bus, especially
in the context of a services-oriented
architecture (SOA), provides a basis for
integrating analytics with utility applications
and back-office applications in a way that
preserves performance and modularity,
and provides vendor independence by isolating
the effects of changing or replacing
any particular application or analytic tool.

For utility analytics systems, we recommend
a dual bus arrangement, where
a standard enterprise integration bus
provides connection among analytics
and enterprise systems, and a second
event-processing bus handles the higher-bandwidth
data transport and rapid event
response traffic. This extended SOA
approach for utility analytics systems
supports both centralized and distributed
analytics processing, as well as providing
mechanisms for machine-to-machine
(M2M) communication for automatic use
of analytics outputs in control applications
and in support of composable (compound)
analytics services.
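
The dual-bus idea can be sketched with a toy in-process publish/subscribe bus; a real deployment would use SOA middleware and an event-processing engine, and the topic names here are invented for illustration.

```python
from collections import defaultdict
from typing import Callable, Dict, List

class Bus:
    """Minimal in-process publish/subscribe bus (illustrative only)."""

    def __init__(self, name: str) -> None:
        self.name = name
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(message)

# Dual-bus arrangement: enterprise traffic and high-rate event traffic
# travel on separate buses so neither degrades the other.
enterprise_bus = Bus("enterprise")
event_bus = Bus("event-processing")

event_bus.subscribe("feeder/fault", lambda m: print("protection app got", m))
enterprise_bus.subscribe("kpi/reliability", lambda m: print("dashboard got", m))

event_bus.publish("feeder/fault", {"feeder": 12, "type": "overcurrent"})
enterprise_bus.publish("kpi/reliability", {"SAIDI": 92.4})
```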

Event Correlation and Notification
Several technologies support analysis
of events as represented in sensor data.
Generally speaking, sensor and event
data must be time-correlated, so data
should be time-stamped. Newer utility
devices make use of GPS timing information
to provide precise and accurate time
correlation. Many analytics also require
geospatial correlation (to determine if a
lightning strike near a substation caused
a circuit breaker trip, for example).
Geographic information systems (GIS)
are used by most utilities to track asset
locations. With proper integration, GIS
databases can augment both real-time
and post-event analytics. Connectivity
models (which are inherent in CIM-structured
databases and also exist in various
forms in energy management and distribution
management systems) provide
the necessary topological information
for event correlation. Combined, these
technologies yield the ability to analyze
grid events through three search criteria:
a time window, a geospatial window and a
connectedness window. Event correlation
tools that perform in these three search
dimensions greatly ease the problem
of analyzing complex events in a utility
transmission or distribution system.
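
A minimal sketch of correlation across those three search dimensions follows; the hop-count function stands in for a real connectivity model, and the distance approximation and thresholds are illustrative assumptions.

```python
import math
from dataclasses import dataclass

@dataclass
class GridEvent:
    """A time-stamped, geolocated event on a named grid node (illustrative)."""
    timestamp: float       # seconds since epoch
    lat: float
    lon: float
    node: str

def correlated(a: GridEvent, b: GridEvent, topology_hops,
               max_dt: float, max_km: float, max_hops: int) -> bool:
    """True if two events fall inside a time window, a geospatial window and
    a connectedness window. `topology_hops(n1, n2)` is assumed to return the
    electrical hop count between two nodes from a connectivity model."""
    dt = abs(a.timestamp - b.timestamp)
    # Rough equirectangular distance: ~111 km per degree of latitude.
    km = 111.0 * math.hypot(a.lat - b.lat,
                            (a.lon - b.lon) * math.cos(math.radians(a.lat)))
    return (dt <= max_dt and km <= max_km
            and topology_hops(a.node, b.node) <= max_hops)

# Did a lightning strike near a substation cause a breaker trip?
strike = GridEvent(1000.000, 41.881, -87.623, "sub7/busA")   # hypothetical
trip   = GridEvent(1000.042, 41.884, -87.630, "sub7/bkr3")
print(correlated(strike, trip, topology_hops=lambda n1, n2: 1,
                 max_dt=0.5, max_km=5.0, max_hops=2))
```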
In addition to post-event correlation
analysis, utilities must monitor a great
many data points and a great many analytics
that derive from measured data.
It is impractical and ineffective to have
people monitor screens full of streaming
numbers, so automated configurable notification
engines must be used to scan the
data and analytics outputs continually and
then generate notifications to the right
parties when events occur. Event-processing
technology can supply tools that perform
such monitoring and notification on
a subscription/configuration basis. Such
event notification can be implemented in
data historians or in separate event-processing
services.
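
A sketch of such a subscription-based notification engine is shown below; delivery is reduced to a callback, where a real system would route to email, a pager or a control application.

```python
from typing import Callable, Dict, List, Tuple

class NotificationEngine:
    """Configurable notification sketch: parties subscribe to an analytic with
    a condition, and the engine scans incoming values and notifies matches."""

    def __init__(self) -> None:
        self._rules: Dict[str, List[Tuple[Callable[[float], bool],
                                          Callable[[str, float], None]]]] = {}

    def subscribe(self, analytic: str, condition, notify) -> None:
        self._rules.setdefault(analytic, []).append((condition, notify))

    def scan(self, analytic: str, value: float) -> None:
        """Check one incoming value against every rule for that analytic."""
        for condition, notify in self._rules.get(analytic, []):
            if condition(value):
                notify(analytic, value)

engine = NotificationEngine()
engine.subscribe("feeder12/voltage_rms",                    # hypothetical tag
                 condition=lambda v: v < 6840.0,            # 5% under nominal 7.2 kV
                 notify=lambda a, v: print(f"ALERT {a}={v}"))
engine.scan("feeder12/voltage_rms", 6700.0)   # fires
engine.scan("feeder12/voltage_rms", 7200.0)   # silent
```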

General Analytics Tools
Several general-purpose software
technologies have proven useful in analytics
systems in other industries and
fit into the context of utility analytics
as well. These include online analytical
processing, known as OLAP, and its ancillary
tool, the cubing engine. These tools
provide the means to examine a
multidimensional data set rapidly from various,
and possibly evolving, viewpoints so as
to obtain a clear visualization of the inherent
meaning of the data. Separately, data
mining technology provides the means to
sift large volumes of data automatically to
identify patterns and trends that a person
might never recognize in a mountain of
data. Data mining technology is typically
used to assist offline analysis in support
of strategic planning and long-term trend
analysis or event correlation.
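
As a small illustration of the cube idea, a pivot over two dimensions of a toy outage data set plays the role of one face of an OLAP cube; the columns and figures are invented.

```python
import pandas as pd

# Toy outage records; the columns are illustrative dimensions and a measure.
outages = pd.DataFrame({
    "region":  ["north", "north", "south", "south", "south"],
    "quarter": ["Q1", "Q2", "Q1", "Q1", "Q2"],
    "cause":   ["storm", "equipment", "storm", "tree", "storm"],
    "minutes": [120, 45, 300, 80, 60],
})

# A pivot over two dimensions is one face of a cube: the same data can be
# re-pivoted quickly along other dimensions (e.g. cause by quarter).
cube_face = outages.pivot_table(index="region", columns="quarter",
                                values="minutes", aggfunc="sum", fill_value=0)
print(cube_face)
```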

Analytics Management
In a modern utility analytics environment,
there are so many rapidly updating
analytics, metrics and key performance
indicators that it is necessary to provide
tools to support analytics management.
Analytics management entails three functions:
control of access to analytics based
on job roles; subscription to analytics by
users on an ad hoc basis; and configuration
of subscribed analytics on a user-by-user
basis. Analytics management tools
provide the means for a user to subscribe
to a particular analytic and have it delivered
to the desktop or to email or a pager
service as needed and then unsubscribe
when the need for that particular analytic
is past. This becomes especially important
when tens of thousands of data points are
being measured and analytics are being
derived from these data points.
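
The three management functions can be sketched as follows; the roles, analytic names and configuration keys are hypothetical.

```python
from typing import Dict, Set

class AnalyticsManager:
    """Sketch of the three management functions: role-based access control,
    ad hoc subscription, and per-user configuration."""

    def __init__(self, role_permissions: Dict[str, Set[str]]) -> None:
        self._permissions = role_permissions                   # role -> allowed analytics
        self._subscriptions: Dict[str, Dict[str, dict]] = {}   # user -> analytic -> config

    def subscribe(self, user: str, role: str, analytic: str, **config) -> None:
        if analytic not in self._permissions.get(role, set()):
            raise PermissionError(f"role {role!r} may not access {analytic!r}")
        self._subscriptions.setdefault(user, {})[analytic] = config

    def unsubscribe(self, user: str, analytic: str) -> None:
        self._subscriptions.get(user, {}).pop(analytic, None)

mgr = AnalyticsManager({"planner": {"feeder-loading"},
                        "operator": {"feeder-loading", "fault-location"}})
mgr.subscribe("alice", "operator", "fault-location",
              deliver_to="email", window="15min")   # hypothetical config keys
mgr.unsubscribe("alice", "fault-location")
```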

Results Distribution to Humans
Many of the analytics results must be presented
to people to support various decisions
and actions. Appropriate technology
for such presentations includes portals,
dashboards and notifications via email,
pager or cell phone. Portal, dashboard
and related technologies have become
quite common in advanced information
systems and in most back-office systems.
Their use in utility analytics systems is
more recent but is growing rapidly and
represents a de facto standard approach
to distribution of analytics and KPIs.
Distribution of notifications via email
and pager is also common in monitoring
systems; we extend the concept to monitoring
of advanced analytics in addition to
basic variable threshold crossings.

Analytics Architectural Framework

Figure 2: Example Analytics Architectural Framework

It is not enough to have a selection of
technologies available for use in intelligent
grid analytics implementations.
These components must fit into a framework
that provides the environment to
integrate existing and future applications
and analytics into the operating
and business environment of the utility.
The architectural framework defines the
integration schema, the relevant communications
standards, how analytics are
managed, how multiple vendor analytics
tools are integrated and how analytics
results are distributed. Figure 2 shows an
example of an architectural framework
for utility analytics.

The architectural framework seems
complex at first glance, but its overall
structure adheres to three primary principles:
use of the SOA with an enterprise
bus for integration at the business services
level; use of a second event-processing
bus for integration of the real-time
event data; and integration of data from
a variety of sources into a CIM-structured
data warehouse. The framework has provisions
for data management, analytics
management, and network and device
management, as well as data security
services. The framework supports both
centralized and distributed analytics
and allows for variable trade-offs in the
degree of distribution.

Note that this reference architecture is
a starting point for the development of a
utility analytics solution. It does not represent
a shrink-wrapped, out-of-the-box
solution. Each utility must be prepared to
customize any such reference architecture
to its unique infrastructure and needs.
It does, however, provide a sound basis
for developing an appropriate
end-to-end utility analytics solution.
The SOA approach inherently implies use
of a delimited set of open standards for
communications, and this is extremely
important in creating scalable, modular
M2M and process-to-process or service-to-service
communications.

Many commercial products are
available to support the middleware
and database functions implied in this
framework, and the framework is
designed to support the integration of
utility and third-party analytics tools
and functions in a vendor-independent
fashion. This provides the utility with
the ability to protect its investments
and not be locked in to a single vendor,
protocol or tool set.

Conclusions

Utility analytics are becoming more
sophisticated and at the same time
more widely used throughout the utility.
As data volumes from intelligent
utility networks and smart grids increase, so
does the need for technology to manage
the data flood and for the analytics that
convert that flood into usable information.
Key technologies, such as IP-enabled
digital communications, software integration
buses, CIM-structured data warehouses,
data historians, event-processing
tools, networked device management
tools, machine-to-machine communications,
portals and dashboards for human
interfaces, and even analytics management
tools are crucial elements of a
successful utility analytics system implementation.
All of these technologies benefit
from an analytics architectural framework
that provides scalability, variable
distribution and modularity, thus ensuring
flexibility and therefore protection of the
utility’s investment.