Locking the Door Against Cybersecurity Attacks

Situation Overview

The biggest threat today to the
transmission and distribution
(T&D) business’ cybersecurity
is not necessarily a virus, worm or terrorist.
While these can all be significant
threats, the culprit is often the company
itself. Particularly as utilities open more
doors to their network automation
and control systems – such as by deploying
more intelligent devices – they can
make themselves more vulnerable to any
cybersecurity threat. Although utilities
can mitigate vulnerabilities, these actions
come at an additional cost.

This paper examines recent developments
in cybersecurity for network
automation and control systems. First,
we review how recent cost-cutting
initiatives and new technologies are
creating cybersecurity problems. We
then look at how the North American
Electric Reliability Corporation’s (NERC)
new cybersecurity standards, CIP 002-
009, may help utilities address these
problems but also create additional
costs. Lastly, the report discusses how
utilities can cost-effectively strengthen
their cybersecurity.

Figure 1: Cost-Cutting Efforts and New Technologies That Open More Doors to the Network

More Cybersecurity Problems

Today most energy delivery businesses
have moved beyond worrying about
specific cyberattacks, which could be
launched by almost anyone from “kids in
a basement” to a company insider. Instead
many are now concerned about how
cost-cutting efforts and the introduction
of new technologies could increase their
vulnerability to any attack, no matter
where it originates. There are two key
corporate efforts that can raise significant
cybersecurity issues:

  • Obtaining greater real-time visibility of
    and access to remote assets; and
  • Implementing more open, standardized
    real-time systems.

Greater Visibility and More Access Points

From intelligent grid initiatives for better
visualizing and reacting to real-time
events to cost-cutting efforts, utilities continue
to open more doors to their control
networks (see Figure 1).

Specific efforts that could create cybersecurity
problems include:

  • More points of access to the grid: Many
    utilities are using more intelligent electronic
    devices (IEDs), smart meters and
    sensors to connect with the field better
    – an important part of a fast-reacting
    intelligent network. Furthermore, utilities
    are allowing more remote access
    to their delivery system networks by
    telecommuters and third-party contractors.
    These efforts don’t just open more
connections to the system; they also let in
    remote parties that may not have sufficient
    cybersecurity systems of their
    own in place.
  • More standard communication networks:
    Although IP-enabled SCADA and
    wireless networks offer many benefits,
    they also create problems. On the positive
    side, an IP-enabled SCADA system
    provides a low-cost alternative to a proprietary
    SCADA system. It also allows
    for better deployment of intelligent grid
    technologies through easier integration.
    Intelligent grid efforts are also pushing
    utilities to use more wireless communications
    (e.g., WiFi, WiMAX and GPRS).
    The problem with these communications
    methods is that they spread out
    standard communication channels into
    the service area, creating more ways
    for cyberthreats to enter the system.
  • More integration with corporate networks:
    To support data-sharing and
    decision support and, in some cases,
    to reduce costs, more utilities are integrating
    their control networks with the
    corporate network. This is troublesome
    because it allows even more traffic on
    and accessibility to the corporate network,
    which means a greater possibility
    of cyberthreats entering the network
    automation and control system.
  • Greater reliance on commonly used
    platforms: It is becoming increasingly
    difficult for IT departments to find
    professionals competent on platforms
    such as Unix, the traditional operating
    system for network automation and
    control systems. These platforms have
    substantially longer learning curves and
    require a costlier workforce. As a result,
    many utilities are turning to more commonplace
    platforms instead. Standardizing
    on platforms such as Microsoft
    Windows offers many benefits, but such
    platforms are also common targets for
    cyberattacks.
  • Vendors have not made cybersecurity
    a top priority: Today many utilities rely
    on standardized commercial applications
    rather than pay for expensive
    customized systems. This creates
    vulnerabilities because many network
    automation and control vendors have
    not built adequate cybersecurity measures
    to protect those applications and
    attackers can readily obtain information
    about how to crack them.

New Regulations Will Now Force Utilities to Take Action

Although the initiatives discussed above
create cybersecurity problems, utilities
can address them by implementing some
additional procedures. Aside from the
direct operational benefits that come
from greater cybersecurity, today’s utilities
have an even stronger incentive to
take action. NERC and the Federal Energy
Regulatory Commission (FERC) released
the Critical Infrastructure Protection (CIP)
standards for cybersecurity, which went
into effect on June 1, 2006. Because most
utilities are familiar with CIP 002-009, this
paper focuses on a few key problems these
standards will likely create for utilities.

Defining and Interpreting a Critical Cyber Asset

Today’s CIP standards provide a more
detailed critical cyber asset (CCA) definition
than previous versions. The standards
require utilities first to define their critical
assets and then to identify CCAs that are
essential to their function (see Figure 2).
However, according to NERC, CCAs
can now also include any cyber asset that
“uses a routable protocol to communicate
outside the electronic security perimeter;
or … uses a routable protocol within a
control center; or … is dial-up accessible.”
For example, a modem that contractors
use to access the network automation and
control system, connected to an otherwise
noncritical cyber asset, could now qualify
as a CCA. This brings us to another problem
with the new CCA definition. Although
the definition is now more specific, there
are still varying interpretations of its
meaning. For example, although one utility
recently identified 300 CCAs, another
comparably sized utility found approximately
three times as many.

This more detailed CCA definition opens
up the possibility that a cyber asset not
essential for a critical asset to function
will be subject to these regulations, simply
because it has a routable protocol or
dial-up modem. As discussed earlier, more
and more utilities are implementing IP-enabled
technologies, such as IP-enabled
SCADA. Therefore, more of those assets
may be subject to the regulations, which
wouldn’t be the case if utilities went with
older technologies. Furthermore, even
some older technologies, such as dial-up
modems, may now count as CCAs.
Ultimately, more assets subjected to
these standards means more effort and
cost for utilities to secure those assets.
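
To make the connectivity tests concrete, the sketch below screens a small asset
inventory against the criteria quoted above. The field names and sample assets are
invented, and this reflects only one reading of the definition; as noted, utilities
interpret it quite differently.

```python
# Hypothetical sketch: screening an asset inventory against the CIP 002 connectivity
# tests described above. Field names and the sample assets are invented; this is one
# reading of the definition, and (as noted) utilities interpret it differently.

def flag_for_cca_review(asset):
    """Flag an asset as a candidate CCA: essential to a critical asset, or caught
    by one of the connectivity tests (routable protocol or dial-up access)."""
    return (
        asset["essential_to_critical_asset"]
        or asset["routable_protocol_outside_esp"]       # routable protocol beyond the ESP
        or asset["routable_protocol_in_control_center"]
        or asset["dial_up_accessible"]                  # e.g., a contractor maintenance modem
    )

inventory = [
    {"name": "EMS server",
     "essential_to_critical_asset": True,
     "routable_protocol_outside_esp": True,
     "routable_protocol_in_control_center": True,
     "dial_up_accessible": False},
    {"name": "Workstation with contractor dial-up modem",
     "essential_to_critical_asset": False,
     "routable_protocol_outside_esp": False,
     "routable_protocol_in_control_center": False,
     "dial_up_accessible": True},
]

for asset in inventory:
    status = "candidate CCA" if flag_for_cca_review(asset) else "outside scope"
    print(f'{asset["name"]}: {status}')
```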

Future Outlook

Figure 2: Previous CIP Standards vs. Current CIP Standards

Some of the greatest challenges for utilities
stem from the fact that demand for
these cost-cutting and new technology
initiatives is unlikely to go away. As the
need increases for better grid visibility
and improved response times to delivery
system events, utilities will demand even
more intelligent grid technologies – such
as IP-enabled SCADA systems, smart
meters and grid-friendly appliances – that
will allow for more and more access to
delivery assets, i.e., more open networks.

Another challenge on the horizon, as
more and more workers retire, is that utilities
will turn to automation to offset the
loss of staff. We’ll see a greater number
of initiatives such as the use of standardized
applications and more commonly
used platforms and corporate networks as
companies use information technology to
reduce costs and increase efficiency and
also to replace their retiring workforces.

At the same time, the new CIP standards
will continue to evolve, and the
broader NERC reliability standards will
further complicate a utility’s ability to
meet cybersecurity standards. Given the
rapidly changing cybersecurity environment
and the age of CIP 002-009 standards,
these regulations will likely change.
Moreover, CIP 002-009 is just a piece
of the NERC reliability standards. These
extensive standards cover a broad range
of issues, from resource and demand balancing
to personnel performance, training
and qualifications. These broader regulations
will put a squeeze on the resources
utilities can devote to cybersecurity.

Future Adoption Patterns

Although some utilities have complied
with voluntary cybersecurity standards,
the CIP standards will force many more to
seriously reconsider their cybersecurity
systems. As a result, utilities will:

  • Have larger cybersecurity budgets.
    Despite the demands of the broader
    NERC reliability standards, CIP 002-009
    means that many utilities will have to
    step up their cybersecurity efforts – and
    spending – to become compliant.
  • Be more aware of cybersecurity costs
    and risks. Now that utilities must comply
    with regulations and deal with the varying
    interpretations of the CCA definition,
    they will take into greater consideration
    the cybersecurity costs associated
    with new projects. But even as utilities
    expand their budgets, they will not be
    able to afford every single cybersecurity
    measure available for their network
    automation and control systems. Instead
    they’ll take a risk management approach
    that weighs the probability and extent
    of risks – events that would cause
    problems for their network automation
    and control systems. They won’t have
    resources to answer every risk, so they’ll
    prioritize their cybersecurity efforts to
    address the most critical risks (e.g., risks
    that may be unlikely to occur but would
    be catastrophic to the system); a simple
    prioritization sketch follows this list.
  • Demand more third-party assistance.
    Utilities will need more products and
    services – such as vulnerability assessments
    and security software packages
    – as they improve their cybersecurity.
    They’ll turn not only to cybersecurity
    vendors but also to industry-specific
    vendors to strengthen one another’s
    solutions.
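
The sketch below illustrates the kind of risk management approach described in the
list above, assuming a simple probability-times-impact score applied to invented
risks; an actual assessment would use utility-specific scales and criteria.

```python
# Hypothetical risk-ranking sketch: score each risk as probability x impact and
# address the highest scores first. The entries and the 1-5 scales are invented.

risks = [
    {"risk": "Malware introduced via contractor remote access", "probability": 3, "impact": 4},
    {"risk": "Coordinated attack on IP-enabled SCADA",          "probability": 1, "impact": 5},
    {"risk": "Worm spreading in from the corporate network",    "probability": 2, "impact": 3},
]

for r in risks:
    r["score"] = r["probability"] * r["impact"]

# Low-probability, high-impact items still rank near the top, matching the
# prioritization described in the list above.
for r in sorted(risks, key=lambda item: item["score"], reverse=True):
    print(f'{r["score"]:>2}  {r["risk"]}')
```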

Essential Guidance

Popping in a security software program
or setting up a firewall is not adequate to
protect a utility’s network automation and
control system. And new technologies and
cost-cutting efforts will continue to expose
the network automation and control system
to cyberthreats. Therefore utilities
must weigh their cost-reduction and intelligent
grid initiatives against the need for
greater security.

Utilities should not necessarily avoid
newer technologies or cost reductions out
of cybersecurity fear. They will, however,
need to determine up front what changes
to network automation and control systems
will cost in terms of cybersecurity
compliance. A new, more efficient technology
isn’t really less expensive if it requires
greater investment in cybersecurity
measures than an older technology would.
More generally, to protect increasingly
vulnerable network automation and control
systems, utilities need to consider:

  • Ongoing vulnerability assessments.
    Before a utility can secure its network
    automation and control system, it
    needs to understand its system vulnerabilities.
    Determining cybersecurity
    needs requires an initial evaluation and
    then, after developing and implementing
    initial cybersecurity strategies,
    utilities must continue re-evaluating
    their systems.
  • Vigilant monitoring of the network
    automation and control systems.
    Utilities should monitor systems on an
    ongoing basis to establish a baseline
    for normal activities. By knowing its
    baseline, a utility will be better able to
    identify unusual activity (see the sketch
    following this list).
  • Enterprise effort. With growing connections
    between business units, cybersecurity
    does not just impact network
    automation and control systems. Utilities
    should be working across business
    units to develop a broader, more comprehensive
    approach to cybersecurity
    that addresses both control networks
    and the corporate network itself.
  • Cybersecurity is more than just software.
    Although there are effective
    cybersecurity applications on the market,
    utilities must research additional
    ways of mitigating cyberthreats, from
    knowing their users to improving their
    physical infrastructure.
  • Thinking outside the regulatory box.
    The new CIP standards provide a good
    start for cybersecurity, but they cannot
    adequately address all related issues.
    Utilities should take the time to evaluate
    additional cybersecurity recommendations,
    such as the ISO 17799 Security
    Standard.
  • Effective solutions for today and
    tomorrow. Because many cybersecurity
    measures are narrowly focused, utilities
    should implement solutions that work
    with their existing technologies and can
    also adapt to meet future technological
    needs.
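
As referenced in the monitoring recommendation above, the following sketch shows the
baseline idea in its simplest form: build a statistical baseline from a history of a
normal-activity metric (here, hypothetical hourly event counts) and flag readings that
deviate sharply. The metric and the three-sigma threshold are illustrative assumptions.

```python
# Minimal baseline-and-deviation sketch for control-network monitoring.
# The metric (events per hour) and the 3-sigma threshold are illustrative assumptions.
from statistics import mean, stdev

baseline_window = [42, 38, 45, 40, 44, 39, 41, 43]   # hourly event counts during normal operation
mu, sigma = mean(baseline_window), stdev(baseline_window)

def is_unusual(count, threshold=3.0):
    """Flag an hourly count that deviates from the baseline by more than `threshold` sigma."""
    return abs(count - mu) > threshold * sigma

for observed in (41, 44, 97):
    print(observed, "unusual" if is_unusual(observed) else "normal")
```
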
    INTERVIEW: Frank Hoss

    Energy & Utilities: From your perspective,
    what is the intelligent grid?

    Frank Hoss: The intelligent grid focuses
    primarily on the efficient, reliable and safe
    distribution of electricity. It’s the marriage of
    the electrical distribution infrastructure with
    a communications infrastructure. That can
    be in the form of a number of communication
    protocols such as two-way radio frequency,
    broadband over power line, power line carrier,
    cellular or WiMax. It’s that combination that
    makes the data – which was always available on
    the distribution grid – more readily accessible
    for operating, maintenance and planning decisions.
    It also lets the utility run its distribution
    operations in a much more autonomous, automated,
    remote fashion.

    E&U: What do you see as the catalyst behind
    efforts to make the intelligent grid a reality?

    FH: A number of events have made it possible
    to focus on the intelligent grid and make it
    feasible. Data from distribution grid devices
    has always been available, but the problem has
    been retrieving it and using it remotely on a
    wide scale. The needed communications have
    been prohibitively expensive until now. Over
    the last several years, communication technology
    has developed rapidly, so bandwidth and
    wide-scale deployment are more economically
    feasible now. And many utilities are experiencing
    government or regulatory mandates to
    implement smart metering, which requires a
    very robust, wide-scale communications infrastructure.
    So utilities now have a mechanism to
    get communications in place affordably.

    And to a large extent, the distribution grid
    has been operated as a “black box,” with few
    actual data points used in grid operations.

    But with programs and initiatives like demand
    response, dynamic pricing, distributed generation,
    renewable and alternative energy sources,
    islanding, smart homes, much more needs to
    be known about the distribution grid to keep it
    operating efficiently and safely. And because
    it’s not uncommon for it to take 10 or 12 years
    from concept to execution, efficient grid operations
    are going to be instrumental in alleviating
    either transmission congestion or generation
    capacity problems.

    E&U: What are the main challenges utilities
    face in making the intelligent grid a reality?

    FH: The intelligent grid is going to be an evolution,
    starting with mandated smart metering.
    For utilities, it’s a huge investment, and they
    want to get it right the first time. Communications
    is a major part of that. Even with widescale
    communication solutions becoming
    affordable, it’s still changing quickly, and something
    else even better may emerge in the near
    future. One good example is WiMax. Now you
    have several major telecommunications companies
    that are investing billions of dollars in
    WiMax over the next several years, which could
    make it a preferred solution for utilities.
    Also, when utilities are considering intelligent
    grid solutions, what they’re implementing
    today must position them for the operating and
    regulatory environments of 15 to 20 years from
    now. That’s why it’s so important that they do
    it right the first time, so they can leverage what
    they’re doing today for future implementations.

    E&U: How do utilities ensure the intelligent
    grid solutions they choose for today will be
    current in the fast-paced technology development
    cycle of the future?

    FH: When we talk about intelligent grid
    solutions, it’s multiple solutions that will
    need to be implemented and integrated.
    But any solution must meet three primary
    conditions: 1) the solution must be
    open, meaning there’s no proprietary
    software or code involved, so the utility
    can interface with that particular solution
    without having to go back to the vendor;
    2) the solution has to adhere to a standard
    communication protocol, such as being IP-addressable
    or in line with the IEC 61850
    requirements; and 3) the solution must be
    scalable. When you talk about building the
    intelligent grid over the entire distribution
    infrastructure, you’re talking about having
    to integrate multiple solutions. The more
    they can scale, the better for the utility
    in terms of both integration and maintenance
    of those solutions going forward.

    Finally, utilities must realize the intelligent
    grid is an evolution that will take
    years to implement. They need to make
    sure that what they’re doing today will
    prepare them for 15 to 20 years down
    the road. And the intelligent grid is not
    the same for every utility, because each
    one has different drivers that will result
    in different solutions. One good way for
    the utility to prepare is to start examining
    plausible scenarios, maybe in five-year
    increments, within which they may find
    themselves having to operate at some
    point and determining how the company
    must respond to be successful. And then
    extend these scenarios to 15 or 20 years
    out. Each solution should be a building
    block or enabler for what they’re going
    to need to do over the next five years.
    It’s not a perfect crystal ball, but I think
    adherence to those three primary conditions
    and the use of plausible scenarios
    can help the utility develop a realistic
    intelligent grid road map.

    E&U: Are there any additional challenges
    facing a utility that needs to implement an
    intelligent grid solution?

    FH: I think another challenge is the sheer
    magnitude of data that’s going to be available
    from implementing intelligent grid
    solutions. It’s going to require that the
    utilities have an extremely robust data
    management capability. Probably the
    best way to show this is to talk through an
    example. If you take a utility that’s looking
    to deploy just 1 million smart meters, that’s
    going to be equivalent to about 110 million
    records on a daily basis. Of those 110 million,
    96 million would be usage records,
    with another 10 million records associated
    with voltage readings, and then an estimate
    of about 4 million records associated
    with missed readings/rereads. Add to this,
    some of the other programs and initiatives
    that are going on in the industry;
    for example, distributed generation. If 2
    percent of your customers will have distributed
    generation, that adds 4 million
    records per day for the distributed generation
    requirement. And then when you start
    looking at load management and demand
    side management, that could add up to
    another 50 million records per day. So
    when you examine this on an annual basis,
    all those records represent a combined 59
    terabytes of data, which is huge. The other
    aspect is the timing of the data. Prior to
    intelligent grid solutions, the need for the
    frequency of the data ranged anywhere
    from months down to days, and in some
    cases, maybe hourly readings. With the
    intelligent grid solutions operating the distribution
    grid in a near-real-time fashion,
    we’re now talking about milliseconds in
    some cases. This is new territory for many
    of the utilities and for the vendors that are
    providing solutions – not only to be able to
    acquire the data but to process it and provide
    it as output to wherever it’s needed
    within a millisecond-or-less time frame.
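
    The record counts quoted above can be reproduced with straightforward arithmetic.
    The sketch below does so, and assumes an average record size of roughly 1 KB (a
    figure not given in the interview) purely to show how an annual volume in the
    neighborhood of 59 terabytes can arise.

```python
# Reproducing the record-count estimate for 1 million smart meters, then converting
# to an annual data volume. Record counts come from the interview; the ~1 KB average
# record size is an assumption made only to show how a figure near 59 TB/year arises.

daily_records = {
    "usage (96 per meter per day)":  96_000_000,
    "voltage readings":              10_000_000,
    "missed readings/rereads":        4_000_000,
    "distributed generation (2%)":    4_000_000,
    "load/demand-side management":   50_000_000,
}

records_per_day = sum(daily_records.values())      # ~164 million records/day
records_per_year = records_per_day * 365           # ~60 billion records/year

avg_record_bytes = 1_000                           # assumed; not a figure from the interview
terabytes_per_year = records_per_year * avg_record_bytes / 1e12

print(f"{records_per_day:,} records/day, {records_per_year:,} records/year")
print(f"~{terabytes_per_year:.1f} TB/year at {avg_record_bytes} bytes/record")
```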

    E&U: What’s the justification for implementing
    intelligent grid solutions?

    FH: Whether it’s being mandated or
    whether the programs or initiatives are
    under way to make utilities much more
    energy-efficient, intelligent grid solutions
    will have to be deployed. Utilities are challenged
    to provide a business case any
    time they spend money, not only internally
    but a justification that will stand up to the
    scrutiny of regulators as well as customers
    and stockholders. As far as intelligent
    grid solutions, whether it be smart metering
    as the starting point or other solutions,
    utilities have to leverage whatever
    piece they’re currently considering to
    its maximum value within the company.
    For example, if you’re installing smart
    metering primarily for automated meter-reading
    capabilities, consider whether
    you can also use that meter information
    to identify outages. You’ll detect the outages
    much sooner and do a much better
    job of deploying field crews to make the
    repairs. It would take a very small additional
    investment to be able to leverage
    initial smart metering capabilities to support
    outage management like this, and the
    utility would realize some very significant
    benefits. There are a lot of opportunities
    like that in these intelligent grid solutions.

    E&U: How does the GridWise Alliance
    support the intelligent grid efforts?

    FH: The GridWise Alliance is an organization
    that has been together about five
    years. It is a collection of utilities as well
    as vendors primarily focused on effecting
    policy, legislation and regulations
    within the utility industry, at federal and
    state levels. We want to make sure that,
    whether it’s the Department of Energy,
    FERC, other regulatory bodies or Congress,
    they’re putting legislation out
    there that advances the intelligent grid
    and the deployment of energy-efficient
    solutions. We’re also interested in having
    organizations like the DOE provide
    the right type of research and programs
    to make the intelligent grid solutions a
    reality. We want to stay on top of those
    drivers and mandates to ensure they’re
    in line with where the distribution grid
    needs to go.

    Implementing the Right Network for the Smart Grid: Critical Infrastructure Determines Long-Term Strategy

    Energy Conservation. Energy
    Efficiency. Go Green. Clean Tech.
    Smart Grid. The utility industry
    has enacted a number of initiatives with
    a common goal – improving the quality,
    value and long-term sustainability of
    electricity delivery. Utility executives
    are being challenged to chart a course
    for the next century of utility services
    and prepare the grid for changing
    and often-unknown demands of their
    customers. Utility issues are moving
    beyond traditional revenue-cycle
    services to embrace energy efficiency,
    alternative generation and improved
    customer services.

    For the past 20 years, the industry
    has focused considerable resources on
    automated meter reading, primarily
    intended to improve the accuracy and reduce the
    cost of monthly revenue reads. Today
    the focus has broadened to a number of
    related applications leveraging the
    same technology assets. Dynamic pricing
    programs hold great promise for
    flattening the load curve, but require
    more sophisticated and granular measurement.
    This expansion of demand response creates significant opportunities
    in both commodity hedging and
    customer services, but may also change
    the economics of distributed generation.
    Increases in distributed generation will
    have untold impact on distribution operations,
    expanding the need for distribution
    automation beyond the substation. The
    convergence of these varied and interrelated
    applications is creating exciting
    opportunities to reshape the nature of
    electric delivery.

    Utility leaders are developing comprehensive
    strategies to implement and
    support a variety of new applications that
    move well beyond meter reading. Understanding
    the cumulative requirements of
    these operational initiatives leads to the
    recognition that an advanced networking
    infrastructure is required to efficiently
    manage the many devices that create a
    “smart grid.” The right network brings
    smart devices “on line” and allows for
    real-time command and control of the
    entire distribution system. Figure 1 illustrates
    how an advanced utility network
    connects the components of
    a smart grid.

    Utilities’ ability to realize the vision
    of a smart grid is largely determined by
    their choosing the right network infrastructure:
    one that is functionally capable and cost-effective today, yet will support
    future (and often unknown) requirements.
    Advanced networking from the substation
    to – and into – the premises creates
    the fundamental platform on which smart
    grid initiatives are built. The right network
    infrastructure provides secure and seamless
    connectivity, supporting any utility
    application. Innovative, standards-based
    applications can leverage smart grid
    assets, turning new concepts into new
    products.

    Figure 1: The right network infrastructure is key to realizing the vision of the smart grid.

    The practice of a common network
    infrastructure supporting a number of
    applications is not new. For example,
    consider the Ethernet system installed
    in most offices. When the network was
    installed, it was intended to support a
    variety of applications, including email,
    Web browsing, video delivery and more.
    However, it was not necessary to decide
    up front all of the computers, printers or
    applications that would ever run over
    that network. By choosing a standards-based
    network with the right performance
    characteristics, new technologies
    are easily and seamlessly incorporated
    onto the network.

    Utilities that make the right strategic
    decisions regarding the networking platform
    to deliver the smart grid will enjoy
    similar flexibility and business value creation.
    Failure to make the right networking
    choice may result in a utility’s future initiatives
    being hampered by the limitations of
    its network.

    Defining Your Smart Grid

    The first step in implementing the right
    network requires a utility to develop a
    strategic plan and define the technical,
    performance and price characteristics
    needed to support both current and future
    applications. Input from all areas of the
    utility operation should be included, as
    all departments are affected by – and can
    benefit from – the smart grid. Operational
    use cases that consider both current and
    anticipated needs should be reviewed,
    including representation from customer
    service, metering, distribution operations,
    information technology, revenue
    protection, regulatory and rate making.
    A considerable body of work is available
    to assist in this effort, including published
    documents from EPRI, GridWise and
    UtilityAMI.

    Open, Not Closed

    Next, technical requirements of the
    advanced utility network need to be
    defined to support the operational
    requirements of the business. Given
    the variety of devices and interrelations
    between several applications, use of
    single-purpose or proprietary networking
    should be avoided. Implementing
    these technologies increases complexity
    and cost, while simultaneously decreasing
    long-term flexibility. Standards-based
    networking is the safest, surest
    route when specifying the network
    infrastructure of the smart grid.

    The dominant standard in networking
    is Internet protocol, known simply as
    IP. Beyond the World Wide Web, IP is the
    networking standard used in managing
    nearly all telecommunications, cable
    and information technology applications.
    Hundreds of billions of dollars in collective
    research and development make IP
    the gold standard for mission-critical
    networking around the world.

    IP addresses many of the challenges
    of running multiple applications and
    devices on the same network. The IP
    suite delivers proven technologies for
    addressing, routing, quality of service
    and a host of related networking functions,
    all demonstrated at scale. With IP,
    vendors can compete to develop best-of-breed
    products for a variety of applications,
    yet share a common network infrastructure
    to minimize cost and complexity.

    Security Via Proven Technology

    Historically, security in many remote
    utility applications consisted solely of
    “obscurity” created through the use of
    proprietary products or the use of simple
    passwords that were rarely changed. The
    move to sophisticated command-and-control
    applications mandates significantly
    more proven and robust security across
    the entire grid.

    A number of proven IP security technologies
    (for example, IPSEC or SSL) are
    available to address this need. These
    technologies are widely used across a number
    of industries, from securing financial
    transactions over the Web to managing a
    range of other security concerns. Most importantly,
    these technologies are constantly
    improving due to the collective efforts
    of countless vendors. IP suite security
    technologies allow utilities to effectively
    address concerns of spoofing, denial of
    service and unauthorized access without
    having to invent new technologies
    or rely on the efforts of
    a single vendor.
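
    As a small, concrete illustration of leaning on proven IP security technology rather
    than obscurity, the sketch below opens a TLS-protected connection using only the
    Python standard library. The host name, port and payload are placeholders, and a
    production deployment would also involve certificate provisioning, IPsec tunnels and
    the other controls discussed above.

```python
# Minimal sketch: protecting a device/head-end connection with TLS (the successor
# to SSL) from the Python standard library, rather than relying on obscurity.
# The host name, port and payload are placeholders, not a real endpoint or protocol.
import socket
import ssl

HOST, PORT = "headend.example.invalid", 8443       # placeholder endpoint

context = ssl.create_default_context()             # certificate verification on by default

with socket.create_connection((HOST, PORT)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        print("negotiated:", tls_sock.version())   # e.g., TLSv1.3
        tls_sock.sendall(b"DEVICE-STATUS-REQUEST\n")
        reply = tls_sock.recv(4096)
```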

    Transport Independence

    Application vendors have historically
    been defined by the physical medium
    of their solution. A vendor might be
    “wireless” or “powerline,” but rarely
    both. Over the past few years, there has
    been a dizzying increase in the number
    of available carrier solutions, including
    WiFi, Wi-Max, Zigbee, Z-wave, DS2,
    Homeplug, a variety of cellular standards
    and more. Each of these “standards”
    (reflecting Layer 1 and Layer 2 of
    the OSI seven-layer stack) offers different
    advantages and disadvantages.

    A similar set of choices exists in enterprise
    IT infrastructure. For example, office
    computers may connect to the enterprise
    network either via WiFi or wired Ethernet.
    In the same way an enterprise can choose
    computers that connect using a variety of
    transports, it should be possible for a utility
    to choose the physical transport that
    best achieves the business case for any
    particular part of its service territory (for
    example, wireless for urban or suburban,
    powerline for rural). Such flexibility can be
    achieved by utilities in the same way that
    it is achieved by enterprises: by deploying
    standard, interoperable, IP-based products
    rather than proprietary, transport-specific
    ones.

    Performance: Bandwidth and Latency

    Two critical issues when developing the
    technical requirements of the smart grid
    network are those of available bandwidth
    and latency.

    Bandwidth is the volume of data per
    unit time that can move through the
    network. The first step in determining
    your requirements is to scope how much
    data is required from each device and
    application. Add them up on a time-sensitive
    basis. Where are the peaks?
    Where does it break? Does the solution
    allow you to add bandwidth if needed in
    the future?

    Although smart grid applications generate
    significantly more data than monthly
    meter reading and historic low-density,
    one-way demand response applications,
    it is not a large volume of data relative to
    modern IT systems. Take the very familiar
    example of a manual meter read: Manual
    meter reading generates approximately
    30 bytes per month of data, per customer.
    A new smart meter, collecting multichannel,
    15-minute interval data with event
    logs, security logs, power quality and
    other measures might generate over
    10,000 times as much – perhaps 50 to
    60 KB of data per meter, per day.
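
    The sketch below simply works the arithmetic from the figures above, using the
    midpoint of the 50 to 60 KB/day range; it confirms the multiple is comfortably
    over 10,000.

```python
# Working the comparison from the text: ~30 bytes/month for a manual read versus
# roughly 50-60 KB/day for a multichannel interval meter (midpoint used below).
manual_bytes_per_month = 30
smart_bytes_per_day = 55_000                       # midpoint of the 50-60 KB range above
smart_bytes_per_month = smart_bytes_per_day * 30

factor = smart_bytes_per_month / manual_bytes_per_month
print(f"~{factor:,.0f}x more data per meter per month")   # comfortably "over 10,000 times"
```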

    In an environment of application-specific,
    low-bandwidth solutions, it may
    seem that this much data could never
    be supported, nor would ever be necessary.
    The history of networking, however,
    suggests that if additional data is made
    available, customers will always find a
    use for it. For example, over time, public
    websites have migrated from serving
    small text pages of a few KBs, to rich
    graphical pages of hundreds of KBs, to
    streaming audio and video at tremendous
    data rates. These innovations are made
    possible by the availability of economical
    but rapidly growing Internet bandwidth.

    Solutions from the traditional IP-based
    networking world, deployed in utility
    networks today, render it possible for
    utilities to move and manage much larger
    amounts of data than heretofore possible.
    It is no longer necessary for utilities to
    constrain their business operations –
    or for that matter, their imaginations –
    based on the limitations of their vendors’
    technology.

    Latency is equally important. Many
    smart grid applications, including distribution
    automation, outage alarming and
    load control signaling, require very low
    latency, while others, such as metering,
    are more latency-tolerant. Smart grid networking
    needs to support end-to-end and
    device-to-device latencies not of minutes
    or hours, but of seconds and milliseconds.
    To manage traffic appropriately, networking
    technology must support message
    prioritization, allowing critical, latency-intolerant
    messages primacy over other
    network traffic. For example, meter-reading
    acquisition is generally expected only
    within a time window measured in minutes
    or hours, while some DA applications
    require that remote devices talk “across”
    the network (without routing through the
    back office) in less than a second.
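
    A minimal sketch of the prioritization idea follows, assuming a simple in-memory
    priority queue in which latency-intolerant traffic classes are dispatched ahead of
    latency-tolerant ones; the classes and priority values are illustrative, not taken
    from any networking standard.

```python
# Minimal message-prioritization sketch: latency-intolerant traffic is always
# dispatched ahead of latency-tolerant traffic. Classes and priorities are illustrative.
import heapq
import itertools

PRIORITY = {"da_command": 0, "outage_alarm": 1, "load_control": 2, "meter_read": 9}
_seq = itertools.count()          # tie-breaker preserving arrival order within a class

queue = []

def enqueue(kind, payload):
    heapq.heappush(queue, (PRIORITY[kind], next(_seq), kind, payload))

def dispatch_next():
    _, _, kind, payload = heapq.heappop(queue)
    return kind, payload

enqueue("meter_read", "feeder 12, meter 0042")
enqueue("da_command", "open recloser R-7")              # arrives later but jumps the queue
enqueue("outage_alarm", "transformer T-3 lost voltage")

while queue:
    print(dispatch_next())
```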

    Cost/Performance Balance Drives Value

    Hardware economics historically limited
    the deployment of multi-application
    networks, forcing utilities to implement
    vertically integrated, application-specific
    solutions. While these solutions often
    solved immediate needs, they also created
    significant back-office integration
    issues, increased operational complexity
    and increased long-term costs. Many utilities
    hoped that solutions offering greater
    bandwidth, such as broadband over
    powerline (BPL) would address this need.
    Although BPL delivered strong functional
    performance, the infrastructure costs
    were measured in hundreds of dollars per
    home passed, far exceeding the value to
    be gained from utility applications alone.

    Current hardware economics now make
    it possible to deliver high-bandwidth,
    low-latency networking at reduced cost.
    It is now possible to deploy a systemwide
    networking infrastructure delivering hundreds
    of Kbps and sub-second latencies
    at a fraction of the cost of broadband.
    Specific pricing varies based on utility
    specifics, but can typically rival traditional
    AMR/AMI network pricing. This results in
    the utility realizing significantly greater
    benefits from a variety of applications
    while simultaneously saving 30 to 40 percent
    in operational costs versus operating
    separate communications solutions for
    each application.

    Build It Right – Not Over

    Less than 20 years ago, laptops, ubiquitous
    cell phones, iPods and Xboxes
    did not exist. Considering the
    emergence of new utility applications
    and devices, it is hard to imagine what is
    to come in the next five to 10 years. Even
    today, there is an explosion of new utility
    and consumer devices, including remote
    controllable thermostats, consumer-based
    energy storage appliances, customer displays,
    fault indicators, distribution automation
    applications and more. Regardless
    of the applications or devices that emerge,
    a standards-based network ensures that
    they can easily be incorporated into the
    smart grid. As exhibited in other industries,
    including cable, IT and telecom, a
    robust and flexible network is the basis
    for competitive advantage.

    An IP-based network allows a utility
    to network devices not yet invented,
    if they are built on IP. Product development
    cycles for devices are much faster
    than the life cycle of the network, so
    one must expect new devices will
    become available and utilities will need
    to connect them.

    The Right Network

    Utilities around the world are now leading
    the drive to capture energy efficiency as
    the “fifth fuel.” Smart grid applications
    including advanced metering, demand
    response, distributed generation and distribution
    automation offer utilities all the
    tools to capture this value.

    By specifying and implementing the
    right networking infrastructure, utilities
    are building a strategic technology platform
    that enables a wide range of policy
    and business initiatives for years to come,
    avoiding concerns of near-term obsolescence
    or functionally limiting technologies.
    IP-based networking is really the
    only choice when building the network
    infrastructure for the future.

    Challenges of Demand Response and Distributed Generation

    Project Introduction and Motivation

    Electricity system challenges have
    emerged in Washington’s Olympic
    Peninsula that make it an ideal
    laboratory for advanced intelligent utility
    network (IUN) experiments. The peninsula’s
    transmission feed is approaching
    capacity as a result of population growth,
    but there are significant concerns over
    siting additional transmission or central
    generation assets there due to the ecologically
    sensitive nature of the Olympic
    National Forest, which comprises much
    of the northern portion of the peninsula.
    In 2001, in response to that situation,
    Bonneville Power Administration (BPA)
    began their Non-Wires Solutions program
    of demand response (DR) and distributed
    generation (DG) experiments and pilots
    with the goal of minimizing the environmental
    impact of new transmission and
    generation construction.

    The Pacific Northwest GridWise™ Testbed
    program, part of the Department of
    Energy’s GridWise™ initiative, was established
    in 2005 to coordinate technology
    projects supporting the GridWise™ vision
    of a transactive electric infrastructure
    leveraging advances in information technology
    and communications. The goals
    of GridWise™ and the challenges faced
    by BPA led to the creation of the Olympic
    Peninsula Project as part of the Testbed
    program. This project combines DR and
    DG control, using a real-time retail market
    clearing mechanism to determine the
    optimal balance of DR and DG dispatch
    in response to the current state of the
    system.

    The yearlong Olympic Peninsula Project,
    which concluded in March 2007, was
    a partnership involving a number of organizations:
    Pacific Northwest National Laboratory
    (PNNL), IBM Research, Bonneville
    Power Administration, Invensys Controls
    and several public and private utilities
    in the region. PNNL, working with BPA,
    defined the project and acted as the overall
    project manager. IBM Research was
    responsible for the system integration of
    the various components with the market
    clearing system developed by PNNL.
    Invensys Controls provided the residential
    equipment, including communicating
    meters, programmable thermostats, load
    control modules for water heaters and
    broadband gateways for communication.

    Design of the Experiment – Participants and Structure

    The Olympic Peninsula Project involves
    about 120 residential customers and several
    commercial/industrial customers.
    The residential customers are divided into
    several contract types and a small control
    group. The contract types are fixed price,
    time of use/critical peak price (TOU/CPP)
    and real-time price (RTP). One of the goals
    of the project is to compare the performance
    of the three contract types, with
    particular emphasis on the RTP versus
    TOU/CPP comparison. The hypothesis
    being tested is that a properly designed
    and implemented RTP system will be more
    effective at managing Distributed Energy
    Resources (DER) such as DR and DG than
    a TOU/CPP system.

    Rather than work through the process
    of establishing experimental tariffs for
    the residential participants, PNNL implemented
    a shadow billing system with
    an online payment portal. Participants
    receive their normal utility bill based
    on actual consumption, and pay it as
    they ordinarily would. They receive an
    additional online bill for the experiment
    that represents their consumption from
    a virtual distribution feeder. That bill is
    paid via the online portal using funds
    seeded into an account at the start of
    each quarter, so customers experience no
    additional out-of-pocket expense (part of
    the project budget was reserved for this
    purpose). The amount of funds that are
    seeded is based on historical consumption
    for each customer.

    Figure 1: Summary of Olympic Peninsula Project Components

    To give the experiment some level
    of real-world motivation and benefit in
    exchange for being responsive to critical
    situations or high prices, PNNL refunds
    any balance remaining at the end of each
    quarter back to the residential customer.
    Therefore, the more aggressively a customer
    responds, the larger his refund
    check will be. The experiment has been
    designed to allow the most aggressive
    participants to receive on the order of
    $150 – roughly what it might cost to take
    a family out to dinner, for example.

    The virtual distribution feeder is implemented
    with feeder modeling software at
    PNNL using the real consumption information
    being collected. This allows all the
    participants to appear as if they’re on
    the same feeder, and that feeder can be
    artificially constrained to create critical
    situations as part of the experiment.

    For the residential customers, the
    DR assets are the heating and cooling
    systems and the water heaters. The
    physical control components in the
    homes have been provided by Invensys
    Controls, and, in the case of the TOU/CPP
    contract homes, the Invensys GoodWatts
    pilot system is being used to implement
    the control management. Homeowners
    can set up occupancy profiles and time-of-day
    schedules based on the predetermined
    time-of-use tariffs. Critical peak
    price events are scheduled as part of
    the experiments based on expected constraint
    times.

    Fixed-price contract homes do not
    have any DR automation, but homeowners
    do have access to the same Web
    portal as the TOU/CPP and RTP contract
    homeowners, showing up-to-date consumption
    information. Their equipment
    also has the ability to help them manage
    reduction of electricity consumption
    through mechanisms such as night-time
    thermostat set-backs, and part of what’s
    being observed is how well those homeowners
    reduced their overall load.

    The DR assets of the RTP contract
    homes, as well as the DG units, are
    managed by their participation in an
    artificial real-time market that clears
    every five minutes. The market incorporates
    information from multiple sources,
    including the base transmission capacity
    and price (provided by a live Dow Jones
    feed of the wholesale price), virtual
    distribution feeder capacity and actual
    distribution cost, the current non-curtailable
    load, the current curtailable load
    and corresponding buy bids, and the
    current dispatchable DG capacity and
    corresponding sell bids submitted to the
    market on each cycle. From this information,
    the market determines a new clearing
    price for retail electricity every five
    minutes, and that clearing price is communicated
    to the DR and DG assets. Each
    of those assets then responds based on
    its most recent bid and the parameters
    defined by its owner.
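
    The project’s actual clearing engine is not described in detail here, so the
    following is only a stylized sketch of how a five-minute double auction can produce
    a clearing price: buy bids from curtailable load sorted from highest to lowest, sell
    offers from DG and the wholesale feed sorted from lowest to highest, matched until
    they no longer cross. All bid values are invented.

```python
# Stylized double-auction sketch (not the PNNL implementation): clear buy bids
# from curtailable load against sell offers from DG and the wholesale supply.
# All prices and quantities below are invented for illustration.

buy_bids = [(0.12, 40), (0.10, 30), (0.07, 50)]     # ($/kWh willing to pay, kW of curtailable load)
sell_offers = [(0.06, 60), (0.09, 30), (0.14, 25)]  # ($/kWh asked, kW offered: wholesale feed + DG)

buy_bids.sort(key=lambda bid: bid[0], reverse=True)   # highest willingness to pay first
sell_offers.sort(key=lambda offer: offer[0])          # cheapest supply first

cleared_kw, clearing_price = 0, None
b, s = iter(buy_bids), iter(sell_offers)
buy, sell = next(b), next(s)
buy_left, sell_left = buy[1], sell[1]

while buy[0] >= sell[0]:                              # trade while demand outbids supply
    traded = min(buy_left, sell_left)
    cleared_kw += traded
    clearing_price = (buy[0] + sell[0]) / 2           # midpoint rule, one common convention
    buy_left -= traded
    sell_left -= traded
    try:
        if buy_left == 0:
            buy = next(b); buy_left = buy[1]
        if sell_left == 0:
            sell = next(s); sell_left = sell[1]
    except StopIteration:
        break

print(f"cleared {cleared_kw} kW at ${clearing_price:.3f}/kWh")
```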

    Implementing the System Using Event-Based Middleware

    The most innovative functionality in the
    Olympic Peninsula Project is the market-
    based control of DR assets in the
    RTP contract homes. The experiment
    requires each programmable thermostat
    to understand how to create a buy
    bid into the real-time market based on
    homeowner comfort goals, the current
    state of the device (e.g., the temperature
    in the home) and the current trends of
    the market. Note that the water heaters
    are not bidding into the market; they are
    using an open-loop control scheme in
    which they only respond to price signals
    from the market. Both the thermostats
    and water heaters then have to respond
    appropriately to the clearing price generated
    every five minutes before the cycle
    starts again. (The DG assets are also bidding
    and responding but using a different
    software and communication approach,
    so they are not discussed here.)
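
    The bidding algorithms themselves belong to the project, so the sketch below only
    illustrates the general shape of the idea: a thermostat translates how far the home
    has drifted from its comfort setpoint into a buy bid around the recent average
    price, then runs only if its bid would have cleared. Every constant shown is
    invented for illustration.

```python
# Illustrative thermostat bidding sketch (not the Olympic Peninsula algorithm).
# Idea: the further the home drifts from its comfort setpoint, the more the
# thermostat is willing to pay. All constants below are invented for illustration.

def buy_bid(indoor_temp, setpoint, comfort_band, avg_price, price_range, heating=True):
    """Map temperature deviation onto a bid around the recent average market price."""
    deviation = (setpoint - indoor_temp) if heating else (indoor_temp - setpoint)
    urgency = max(-1.0, min(1.0, deviation / comfort_band))   # clamp to -1 .. 1
    return avg_price + urgency * price_range

def respond(bid_price, clearing_price):
    """After the five-minute clearing, run only if the bid is at or above the price."""
    return bid_price >= clearing_price

bid = buy_bid(indoor_temp=19.2, setpoint=21.0, comfort_band=2.0,
              avg_price=0.09, price_range=0.04)
print(f"bid ${bid:.3f}/kWh ->", "heat ON" if respond(bid, clearing_price=0.095) else "heat OFF")
```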

    The design and implementation of this
    part of the system is based on a prototype
    event-based programming framework
    called Internet-scale Control Systems
    (iCS) developed at IBM’s T.J. Watson
    Research Center. iCS is an example of
    what is currently emerging in the academic
    and research community under
    the category of cyber-physical systems,
    reflecting the increasing importance of
    monitoring and managing the physical
    world in a much richer and more detailed
    fashion. The physical world, in this case,
    is the electric grid and its associated end-use
    loads (such as heating and air-conditioning
    systems).

    In practical applications, there is an
    additional aspect that needs addressing.
    Cyber-physical systems will almost always
    operate within the context of some business
    environment. In the Olympic Peninsula
    Project, the business environment
    is reflected in the use of a market-based
    control scheme and the need for devices
    to be virtually augmented with business
    domain awareness – they need to bid into
    the market and react to clearing events.
    For this reason, IBM Research is focusing on an expanded scope of investigation
    referred to as cyber-physical business
    (CPB) systems, which includes both
    discrete, event-based systems and continuous
    data stream-based systems. The
    objective is to define a middleware framework
    that addresses solutions, such as
    intelligent utility networks, which involve
    the integration of the physical operations
    domain with the business domain; deals
    with the challenges of a highly distributed
    and heterogeneous environment; and also
    reflects the time-dependent nature of
    such integrated solutions.

    By using a loosely coupled event-programming
    approach with a very simple
    programming model, iCS enabled all
    the components of the system to be
    abstracted and represented as simple
    sensor, actuator or control objects in
    the market-control application – from
    the market itself all the way down to the
    heating element relays and load-monitoring
    sensors on the water heaters. The
    market-awareness and bidding/response
    algorithms were added in the middleware
    framework without modification of the
    devices themselves, essentially creating
    new, more intelligent virtual devices from
    the perspective of the market. This also
    allowed the design to be easily adapted
    in response to requirement adjustments
    or other issues that surfaced during
    development. In addition, iCS enabled the
    overall market-control application to be
    structured as a layered set of hierarchical
    control loops linked together through the
    event communication model.

    Another benefit of using a loosely coupled
    event-based approach is the level of
    resiliency it affords. The system was implemented
    so devices fall into a safe degraded
    mode if there is any loss of communication
    or failure in the market clearing signal.
    Operations continue in a less-than-optimal
    mode, but there is no catastrophic failure.
    Once the problem is resolved, the system
    returns to its optimal state.
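
    A minimal sketch of the degraded-mode behavior described above, assuming one simple
    rule: if no clearing price has arrived recently, fall back to a fixed default
    setpoint rather than failing. The staleness threshold and setpoints are illustrative.

```python
# Minimal safe-degraded-mode sketch: if the market clearing signal is stale or
# missing, fall back to conventional thermostat control instead of failing.
# The staleness threshold and the default setpoint are illustrative assumptions.
import time

CLEARING_PERIOD_S = 300                     # market clears every five minutes
STALE_AFTER_S = 2 * CLEARING_PERIOD_S

def choose_setpoint(last_price_msg, now, market_setpoint, default_setpoint=21.0):
    """Use the market-driven setpoint only while price signals are fresh."""
    if last_price_msg is None or now - last_price_msg > STALE_AFTER_S:
        return default_setpoint             # degraded but safe operation
    return market_setpoint

now = time.time()
print(choose_setpoint(last_price_msg=now - 120, now=now, market_setpoint=19.5))  # fresh -> 19.5
print(choose_setpoint(last_price_msg=None,      now=now, market_setpoint=19.5))  # lost  -> 21.0
```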

    Early Results

    In spring 2007, the Olympic Peninsula
    Project is just completing the region’s
    winter electricity-constraint season.
    Several cold periods occurred, and the
    real-time market system operated as
    designed. It limited total load on the virtual
    distribution feeder to the configured
    capacity through price-based DR management,
    and it dispatched DG units when
    the market price for electricity met the
    sell bids. When no constraints existed,
    the market price was stable and in the
    expected range. In terms of performance,
    it appears there was more effective shifting
    of load with the RTP contracts than with
    the TOU/CPP contracts.

    Another intermediate observation
    was that the fixed-price contract homes
    sometimes shifted more load than the
    TOU/CPP homes. That may have been an
    artifact of the small sample size and the
    individual homeowners involved, but it
    is still an important result in that it indicates
    customers will take advantage of
    well-designed energy management tools,
    and may have further been motivated
    by having real-time information sources
    about their consumption and cost easily
    accessible.

    An additional positive indicator surfaced
    as a result of extending the project
    from its original end date of September
    2006 to March 2007: Most of the residential
    participants agreed to continue their
    involvement, presumably because they
    were satisfied with the system’s positive
    impact on their electricity consumption
    and, to some extent, on their overall electricity
    cost.

    Conclusion

    The Olympic Peninsula Project has
    already succeeded in a number of dimensions,
    from demonstrating the effectiveness
    of a cyber-physical business system
    approach in designing such solutions, to
    generating a great deal of interest in and
    awareness of the optimal management of
    combined demand response and distributed
    generation in dealing with a variety
    of issues in electric grid operations. The
    goal of using this project as both an
    experiment and an educational tool is
    already starting to be realized.

    The project also represents an important
    milestone in the realization of the
    GridWise™ vision of an intelligent utility
    network – by integrating utility and customer
    assets; by leveraging advanced
    information technologies; and by bridging
    the operations and business domains of
    the utility environment. Further, it has
    been an extremely successful collaboration
    of industry and public organizations,
    including equipment and information
    technology vendors, investor-owned
    utilities, public utility districts, and the
    Department of Energy’s National Laboratory
    system.

    Application and Design of the Modern Substation Automation Platform

    Figure 1: The Traditional Transmission Substation Automation System

    This paper describes the application
    of the modern automation
    platform in electrical transmission
    substations. The applications described
    within are not “bleeding edge” or merely
    theoretical. Working, proven examples
    of everything described in this paper can
    be found in at least one U.S. or Canadian
    substation, but few substations include all
    platform applications.

    The modern automation platform
    consolidates solutions previously implemented
    in multiple physical boxes from
    different suppliers. A single automation
    platform can perform the roles of security
    gateway, SCADA RTU, communication
    processor, port switch, protocol
    converter, sequence of events recorder,
    alarm annunciator and substation HMI.

    Economic, technological and cyber
    security forces have driven this functional
    consolidation. Figure 1 depicts the
    many components of a traditional transmission
    substation automation system,
    while Figure 2 illustrates the configuration
    of the modern automation platform.
    Having fewer boxes reduces hardware,
    purchasing, installation and maintenance
    costs, while having a single configuration
    tool reduces training and configuration
    costs. Thanks to technological advances,
    powerful yet inexpensive processors can
    now perform complex data processing
    and display tasks in the tough substation
    environment without a fan, hard drive
    or other moving parts. Modern software
    development tools and operating systems
    are also ideally suited for the embedded,
    real-time environment. Therefore, technology
    is no longer a barrier to entry for
    niche vendors.

    Figure 2: The Many Roles of the Modern Automation Platform

    Finally, since substation security functions
    are easier to manage through a
    single gateway, the centralized topology
    enabled by the automation platform
    allows a simpler and more secure
    approach to cyber security.

    Each of the roles of the modern automation
    platform is described below:

    Security Gateway

    The specialized security functions
    required in transmission substations
    include:

    • Firewalls;
    • Data encryption and VPNs;
    • Local and centralized authentication;
      and
    • Event and alarm logging.

    Secure physical routers/switches from
    traditional IT suppliers and security
    software running on PCs have traditionally
    provided a solution. However,
    transmission substation engineers prefer
    not to purchase and maintain separate
    IT-oriented boxes, some of which may
    not meet all substation environmental
    specifications and may require specialized
    and extensive training. To resolve
    this situation, the automation platform
    has taken on these security functions as
    resident tasks, with functions optimized
    and simplified for substation automation
    applications.

    Traditional SCADA RTU Functions

    The traditional substation remote terminal
    unit (RTU) has performed its task
    reliably in transmission substations since
    the advent of computers, reporting realtime
    data to one or more Masters and
    executing control commands. The challenge
    has been to get the RTU to do more
    than it was designed to do. Increasingly,
    users need to obtain SCADA data from
    IEDs using IED protocols, some of which
    are proprietary and require specialized
    knowledge to emulate. Users also need to
    move data securely from the RTU to nontraditional
    users. The modern automation
    platform performs all of the traditional
    RTU tasks plus these new tasks in a single
    package.

    Communications Processing

    Communications processing includes the
    tasks of protocol conversion and media
    conversion. These have often been handled
    by specialized and separately powered
    hardware boxes. Examples include
    separate bit-to-byte converters, legacy
    protocol to DNP3.0 converters, RS232-to-
    RS485 and serial-to-fiber converters and
    Ethernet-to-serial converters. However,
    all of these options increase cost, complicate
    automation system design and
    reduce system reliability.

    Alternately, the modern automation
    platform performs these functions with a
    modular software and hardware architecture
    that permits any Master or Slave protocol
    to be configured on any combination
    of built-in bit synchronous, RS232, RS485,
    fiber optic or Ethernet ports. The resulting
    design is cleaner, more affordable and
    more reliable.

    The modern automation platform also
    enables data to be routed in nonconventional
    ways, such as Master-Master (using
    the automation platform as a mailbox)
    and Slave-Slave (reading data from one
    Slave and writing these data to a second
    Slave).
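
    As a rough illustration of this flexibility, the Python sketch
    below models protocol sessions bound to arbitrary ports and a
    Slave-Slave route that copies a value read from one device into
    a point served to another. The session model, port names and
    tags are invented for the example and do not represent any
    vendor’s actual configuration.

        from dataclasses import dataclass

        # Hypothetical sketch only: the session model, port names and tags
        # are invented for illustration, not any vendor's configuration.
        @dataclass
        class Session:
            role: str       # "master" polls IEDs; "slave" answers a SCADA master
            protocol: str   # e.g. DNP3, Modbus or a proprietary relay protocol
            port: str       # any built-in port: bit-sync, RS232, RS485, fiber, Ethernet

        sessions = [
            Session("master", "ModbusRTU", "RS485-2"),  # poll a legacy IED
            Session("slave",  "DNP3",      "ETH-1"),    # serve the SCADA master over IP
            Session("slave",  "DNP3",      "RS232-1"),  # serve a second master serially
        ]

        # Slave-Slave route: a value read from one device is written to another.
        routes = [("RS485-2:reg40001", "ETH-1:AI.12")]
        database = {}   # the platform's central real-time database

        def on_point_received(source_tag, value):
            """Store a polled value and forward it along any configured routes."""
            database[source_tag] = value
            for src, dest in routes:
                if src == source_tag:
                    database[dest] = value   # a real platform would queue a write

        on_point_received("RS485-2:reg40001", 13.8)
        print(database)   # {'RS485-2:reg40001': 13.8, 'ETH-1:AI.12': 13.8}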

    Distributed I/O With Accurate Time

    Most traditional RTUs have required
    all substation inputs and outputs to be
    wired back to a centralized location,
    particularly if the inputs needed to be
    time-stamped to an accuracy of one millisecond.
    Today high-performance distributed
    I/O modules with IRIG-B timecode
    formats can be mounted on substation
    breakers and transformers, with a
    single fiber optic connection routed back
    to the substation RTU or automation platform.
    Some distributed I/O designs can
    also synchronize I/O module time with
    IRIG-B being transmitted over the serial
    connection to the module, or using NTP
    over Ethernet, to further reduce IRIG-B
    wiring costs.

    Accurate Time Management

    Accurate time-formatting (such as IRIG-B)
    and accurate time-stamping of events
    have been part of transmission substation
    automation design for at least 15
    to 20 years. But there has been little
    coordinated and integrated management
    of time stamps from multiple IEDs in
    multiple protocols and to multiple Masters.
    For example, time-stamped events
    in proprietary relay protocols need to
    be transmitted to the SCADA Master in
    the SCADA protocol, typically as DNP3.0
    events with time. In addition, IEDs that
    communicate using protocols that do not
    support time need to have a time stamp
    placed on events as they are received.
    Multiple SCADA Masters and HMIs may
    each require the same events with time
    but possibly in a different protocol. The
    modern automation platform handles
    these time management functions as
    standard routines.
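
    A minimal Python sketch of these routines, with hypothetical
    event and master structures (the field layout is illustrative,
    not an actual protocol library):

        from datetime import datetime, timezone

        # Hypothetical event handling; field names are illustrative only.
        def normalize_event(point, value, ied_timestamp=None):
            """Keep the IED's time stamp if its protocol carries one;
            otherwise stamp the event as it is received."""
            ts = ied_timestamp or datetime.now(timezone.utc)
            return {"point": point, "value": value, "time": ts}

        def as_dnp3_event_with_time(event):
            # Republish as a DNP3-style binary-input event with absolute
            # time, expressed as milliseconds since the epoch.
            ms = int(event["time"].timestamp() * 1000)
            return {"group": 2, "variation": 2, "index": event["point"],
                    "value": event["value"], "time_ms": ms}

        # One event fanned out to masters wanting different representations.
        ev = normalize_event(point=101, value=1)   # source protocol has no time
        for master, fmt in {"EMS (DNP3)": as_dnp3_event_with_time,
                            "local HMI": lambda e: e}.items():
            print(master, fmt(ev))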

    Local Logic Processing and Control

    Local logic processing and control for
    schemes such as breaker-and-one-half,
    synchro-check, breaker failure, underfrequency
    load shed and black start have
    traditionally been implemented in protective
    relays and dedicated programmable
    controllers. While this is still the case, the
    trend is toward less logic implemented in
    programmable logic controllers and more
    in the automation platform. The math
    and logic packages available in the automation
    platform are based upon open
    and well-supported tools such as Visual
    Basic and IEC 61131. Many logic tasks
    that require data from different parts of
    the substation are handled easily by the
    automation platform, as it is the central
    repository for all substation data.
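
    For example, an underfrequency load-shed scheme of the kind
    listed above might be expressed against the central database
    roughly as in the Python sketch below; the thresholds, stages
    and point names are invented, and a production scheme would add
    timers and supervision:

        # Hypothetical underfrequency load-shed logic run against the
        # platform's central database; stages and point names are invented.
        SHED_STAGES = [
            (59.3, ["FEEDER_7", "FEEDER_9"]),   # first stage: non-critical feeders
            (58.9, ["FEEDER_3", "FEEDER_5"]),   # second stage: shed further load
        ]

        def underfrequency_scan(db, trip):
            """One logic scan: trip closed feeders for every stage whose
            frequency threshold has been crossed."""
            freq = db["BUS_FREQUENCY_HZ"]
            for threshold, feeders in SHED_STAGES:
                if freq < threshold:
                    for feeder in feeders:
                        if db.get(feeder + "_BREAKER") == "CLOSED":
                            trip(feeder)   # control issued through the platform

        db = {"BUS_FREQUENCY_HZ": 59.1,
              "FEEDER_7_BREAKER": "CLOSED", "FEEDER_9_BREAKER": "OPEN",
              "FEEDER_3_BREAKER": "CLOSED", "FEEDER_5_BREAKER": "CLOSED"}
        underfrequency_scan(db, trip=lambda feeder: print("TRIP", feeder))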

    Protective Relay Record Management

    Traditionally it has been common to apply
    a separate communication processor or
    port switch to provide access to engineering
    data residing in substation protective
    relays. The tools to access and process
    records have also traditionally resided
    on remote PCs. Today the automation
    platform serves as a pass-through port
    switch, supporting dial-in connections
    as well as local and remote Ethernet
    connections. The automation platform
    can also filter and preprocess relay fault
    records, so that only pertinent records
    are retrieved for analysis (for example,
    filter-based upon fault distance) or to
    simplify analysis (for example, pulling
    apart separate relay events from one
    large flat “file” and integrating with
    record viewer packages).
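
    A simple Python sketch of the fault-distance filtering described
    above, with invented record fields and an assumed 10-mile
    line length:

        # Hypothetical sketch: retrieve only fault records whose fault
        # distance falls on the protected line (fields and the assumed
        # 10-mile line length are for illustration).
        LINE_LENGTH_MILES = 10.0

        raw_records = [
            {"relay": "R1", "event": "trip",   "fault_distance_mi": 4.2},
            {"relay": "R1", "event": "pickup", "fault_distance_mi": 27.5},  # out of section
            {"relay": "R2", "event": "trip",   "fault_distance_mi": 9.1},
        ]

        def pertinent(records, max_distance=LINE_LENGTH_MILES):
            """Keep only records located on the protected line."""
            return [r for r in records if r["fault_distance_mi"] <= max_distance]

        for record in pertinent(raw_records):
            print(record)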

    Substation HMI

    Substation engineers have always desired
    local tabular or graphic visualization of
    real-time operating conditions in the
    more critical transmission substations.
    The challenge has been to justify the
    relatively high cost of buying the HMI
    software, configuring the displays and
    maintaining the PC-based hardware
    platform and operating system. Users
    report that the highest single cost in
    substation automation is the PC-based
    HMI. Some of the PC hardware issues
    have been addressed with the advent of
    rugged PC power supplies and solid-state
    hard drives, but other costs remain. The
    modern automation platform can serve
    up standard preconfigured Web pages
    to a hardened PC or a laptop, reducing
    software configuration costs. Some automation
    platforms can also support video
    output of Web pages, allowing the use of
    a hardened LCD panel instead of a PC and
    further reducing hardware and maintenance
    costs.

    Alarm Annunciation

    Separate hardwired or serially driven
    alarm annunciator panels have been
    commonly applied in substations to alert
    operators of abnormal conditions. These
    hardware displays can be replaced with
    software-driven displays that are generated
    and served out from the automation
    platform and viewable as a Web page on
    any PC or as video on an LCD monitor.
    Displays can be identical to the old hardware-
    based displays or can change based
    upon real-time operating conditions.

    Sequence of Event Recording

    Dedicated sequence of event (SOE)
    recorders have been applied to determine
    the precise sequence of operation of substation
    equipment before, during and after
    substation events, and to verify proper
    operation (to a typical resolution of one
    millisecond). The modern automation platform,
    combined with high-performance
    distributed I/O, can replace the dedicated
    recorder with preconfigured Web pages
    served out to local or remote PCs. Savings
    include reduced wiring costs and reduced
    training and configuration time (same
    configuration tools and database as RTU,
    alarm annunciation and other functions).

    The Future

    Economic, technological and security
    forces will continue to drive the development
    and application of the single, powerful
    automation platform in transmission
    substations. Designs will become easier
    to configure and commission, and will
    take on new software tasks to convert
    data into information, and information
    into better operating decisions. These
    efficiencies will benefit all utilities, as
    they are challenged to contain costs
    without sacrificing service.

    AMI = Smart Meter + Smart Customer + Smart Utility

    The business world seems to be all
    about “smart” these days. You
    have smart cars, smart money,
    even smart mobs. The utility industry
    is no laggard in this wave of smartness.
    “Smart meters” is one of the current
    buzz phrases that quickly invoke discussions
    about “smart grids” and eventually
    “smart customers.” All this is via the
    smart technology known as advanced
    metering infrastructure (AMI).

    But smartness will not come cheap.
    The investment in AMI is significant:
    north of $130 per customer at current
    market costs. The business cases for
    such projects focus heavily on the meter
    – what type with what functions; the
    network required to collect all the data;
    and what systems, both new ones and
    current ones modified, are required to
    process the tidal wave of data AMI will
    generate. The benefits are no more meter
    readers, far more efficient operations
    and customers who are demand responsive.

    Today it is estimated that only 10 percent
    of U.S. customers have some form
    of automated meter reading device. And
    although that number will change by a
    healthy level in the next five to seven
    years, widespread implementation of AMI
    is still in the planning and pilot stages at
    many utilities. This is not out of desire to
    implement AMI but because the business
    case does not balance.

    However, the industry has an opportunity
    to change the balance of cost-to-benefit
    in its business cases. By aggressively
    adopting open standards, the cost
    of AMI meters can be reduced significantly
    and the applications that “harden” some
    of the “softer” benefits in business cases
    can be effectively developed. The fundamental
    shift in thinking that is required is
    the realization that AMI is not a meter system
    replacement. It is transformational. It
    is a classical innovators’ dilemma. AMI is a
    computer and communications system. It
    is hardware and software. Forget meters.
    Think computers. Think applications!

    Let’s first take a look at the key components
    of AMI – the smart meter, the
    smart customer and the smart utility
    – and then consider the challenges this
    vision presents.

    Smart Meter

    AMI starts with a smart meter. The frequent
    question is, how smart? Is an AMR
    meter with some application integration
    in the back end sufficient?

    AMR automates the collection of meter
    reads. No more meter readers doing their
    appointed rounds on a monthly cycle.
    Instead trucks make the rounds at up
    to 35 miles per hour and AMR meters
    transmit their reads. Or a fixed network of
    varied technologies collect the reads, perhaps
    more frequently than monthly, but
    all to the same end – replace people with
    automated meters.

    But consider the AMI vision. Your
    electric meter is really “smart”; it has an
    embedded computer; it tracks and stores
    all the vital information about your energy
    use; and it communicates your usage and
    other interesting information back to the
    utility on demand. It will send out a distress
    call to the utility when the power is
    out. It can connect to an in-home, IP-based
    LAN that communicates with all your
    appliances, monitoring them and, in times
    of power shortages, can even change the
    operations of these appliances.

    AMI requires a meter that is function-rich
    and able to provide information in
    near-real time. It needs to be two-way,
    upgradeable, programmable and extensible.
    This ups the ante when the decision
    as to what type of meter and how smart it
    needs to be is considered. Accordingly, it
    significantly increases costs.

    Most business cases treat the AMI
    meter as 20-year utility property. With
    over 75 percent of the AMI cost in the
    meter, its associated network and its
    installation cost, changing out this infrastructure
    in a near- or medium-term
    horizon will be a financial problem for the
    business case. A five-year horizon like
    that used for computer equipment would
    quickly sink most business cases.
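
    A back-of-the-envelope illustration of why the horizon matters,
    using the $130-per-customer figure cited earlier and a simple
    straight-line treatment that ignores financing, O&M and salvage:

        # Back-of-the-envelope: annualized AMI cost per customer under two
        # depreciation horizons (straight-line, ignoring financing and O&M).
        COST_PER_CUSTOMER = 130.00   # per-customer figure cited in the text

        for years in (20, 5):
            annual = COST_PER_CUSTOMER / years
            print(f"{years}-year horizon: ${annual:.2f} per customer per year")

        # Output: $6.50/year over 20 years versus $26.00/year over 5 years --
        # a 5-year "computer equipment" life quadruples the annual charge the
        # business case must recover.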

    Nonetheless, conceptualizing the meter
    as a computer versus a utility meter is the
    defining change in perspective required
    for achieving AMI benefits (see Figure 1).
    And with this realization enters the opportunity
    for the fundamental, aforementioned
    shift. The AMI meter is a computing
    platform and thus the value comes from
    the applications it supports – not just the
    functions it replaces or automates. These
    are applications that support the smart
    customer and the smart utility.

    The Smart Customer

    For AMI benefits to be realized you need
    a “smart customer.” A smart customer is
    both informed and empowered in regard
    to their energy use; informed because
    they know their energy use and its associated
    cost over time and how that energy
    use is derived from all the energy-consuming
    devices in their premise.

    Moreover, a smart customer is an
    empowered customer. It does little good
    for a customer to receive hourly or even
    sub-hourly energy and cost information
    if they have no idea what to do with the
    information. They need access to a set
    of easy-to-use applications that allow
    them to visualize the cost of their energy
    behaviors and to make informed decisions
    on how to change those behaviors.
    These applications need to have the ability
    to perform what-if analyses to support
    customers’ energy and cost-efficient
    behaviors. And even more ambitiously,
    the AMI system needs to deliver real-time
    price signals that alert customers of high-cost
    conditions and that allow utilities
    to send signals directly via a meter connected
    to a customer-premise LAN that is
    able to control customer energy-consuming
    devices.
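
    A toy example of the kind of what-if calculation such an
    application could present, with the tariff rates, hours and
    appliance load invented for illustration:

        # Toy what-if: cost of running a 1.2 kW load for 3 hours on-peak
        # versus shifting it off-peak under an invented time-of-use tariff.
        ON_PEAK_RATE = 0.24    # $/kWh, illustrative
        OFF_PEAK_RATE = 0.09   # $/kWh, illustrative
        LOAD_KW, HOURS = 1.2, 3

        as_is = LOAD_KW * HOURS * ON_PEAK_RATE
        shifted = LOAD_KW * HOURS * OFF_PEAK_RATE
        print(f"on-peak: ${as_is:.2f}  off-peak: ${shifted:.2f}  "
              f"savings: ${as_is - shifted:.2f}")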

    The Smart Utility

    Utilities need to be smart for the business
    vision to work.

    Utility support for operations in today’s
    world is basically forensic in nature. Take
    outage management, for example. Generally
    the utility waits until enough customers
    call to report an outage to triangulate
    on the source of the problem and begin
    restoration operations. In the case of a
    grid component failure such as a transformer
    explosion, they wait until the
    explosion happens, usually unaware of
    its impending failure. Why? Because they
    have generally no telemetry installed
    to give them the required information
    before the customer calls or the transformer
    blows.

    But with AMI meters deployed to all
    customers, the utility gains the ability
    to continuously monitor the operational
    status of the network at the end points.
    Network component failures are instantaneously
    observable and can be isolated
    directly to the limited portion of the grid
    impacted. During an outage, the utility
    already knows a customer’s power is out
    and is able to provide a timely and accurate
    estimate for the time of restoration
    when a customer calls.
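
    A simplified sketch of how such distress calls might be grouped
    to isolate the impacted portion of the grid; the
    meter-to-transformer mapping and message handling below are
    hypothetical:

        from collections import defaultdict

        # Hypothetical sketch: group meter "distress call" messages by
        # distribution transformer to isolate the impacted part of the grid.
        meter_to_transformer = {"M1": "T12", "M2": "T12", "M3": "T12", "M4": "T40"}
        power_out = {"M1", "M2", "M3"}   # meters that reported loss of power

        meters_on = defaultdict(set)
        for meter, xfmr in meter_to_transformer.items():
            meters_on[xfmr].add(meter)

        for xfmr, meters in sorted(meters_on.items()):
            dark = meters & power_out
            if dark == meters:
                print(f"{xfmr}: all {len(meters)} meters out - probable transformer outage")
            elif dark:
                print(f"{xfmr}: {len(dark)} of {len(meters)} meters out - probable service issue")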

    On an ongoing basis, the history of
    key metrics such as distribution loading
    – from which key operational metrics of
    transformer performance can be inferred
    – is collected and analyzed. Predictive
    maintenance and the replacement and
    upgrade of network devices are managed
    more effectively and efficiently. The
    distribution grid is managed predictably
    instead of forensically because the AMI
    system can operate as a proxy for many
    of the historical functions of OMS and
    SCADA.

    Challenges

    If this compelling vision of the benefits of
    AMI did not have its challenges, the current
    less than 10 percent penetration rate
    of AMI in the U.S. would be much higher.
    It turns out the challenges are formidable.

    The first challenge is the cost for the
    function required. AMI meters are very
    expensive relative to their predecessor, the
    electromechanical meter. Clearly, a meter
    that is much “smarter” and more function-
    rich than an AMR meter is required
    to realize the benefits of AMI. Such AMI
    meters exist in the market today and are
    generally too expensive to be deployed
    on a large scale to all customers. A significant
    element of the cost is in its research
    and development. As R&D is amortized
    and manufacturing capabilities grow and
    mature, one can expect the cost to decline,
    which has already occurred over the past
    few years. But for AMI meters to reach
    the near-commodity status of today’s
    electromechanical meters, the industry
    needs to take a page out of the computer and
    telecommunications industries’ playbook.
    They need to embrace standardization
    and openness in this technology. AMI is,
    after all, a computer and communications
    network, and the recent history in both
    industries is a dramatic example of what
    open standardization can do to drive down
    costs and deepen functionality.

    The second challenge focuses on the
    set of applications required to realize
    both the smart customer and the smart
    utility. These are customer applications
    that can provide the information that customers
    will access to become informed
    and empowered. These are also utility
    applications and analytics that support
    operations, as well as the integration of
    these applications into the operational
    processes of the utility. Both sets of
    applications require a fundamentally different
    view within the utility in regard to
    the basic systems and business process
    supporting customer information and
    utility operations.

    Implementing these systems both
    through new systems and modification
    and adaptation of current systems is
    complex. Most business case analysis
    focuses on the obvious issue of scalability
    due to the large increase in raw data
    processed within the utility’s systems.
    But the problem is much more pervasive
    than data volume. The integration
    required between batch-oriented billing
    systems, real-time outage systems and
    near-real time customer information analytics
    applications – all utilizing the same
    data stream from the same AMI system
    infrastructure – is as complex and sophisticated
    as any system the industry has
    yet implemented. The important starting
    precept is that the AMI system is a complex
    collection of multiple, interrelated
    computer and communications systems,
    not all of which are operating at the same
    temporal cadence. Effective integration
    of these systems is the controlling problem
    impacting successful implementation.

    The process of planning to add a new
    meter data management system that connects
    to the AMI network, and connecting
    it to the current utility systems through
    currently employed techniques, will more
    than likely deliver sub-optimum results.
    AMI systems cannot be glued together
    in some gerrymandered architecture. A
    much more robust, flexible and extensible
    architecture is required. Again, as with
    the meter, standardization and openness
    are absolute requirements. Building
    systems around Web services and
    SOA has already been adopted by
    industries with the same fundamental
    problem of building similarly complex and
    extensible systems. Only through such
    open and standard state-of-the-art architectural
    approaches will AMI applications
    be successfully and effectively integrated
    to perform all the functions required to
    achieve AMI benefits.

    Adopting open standards for both the
    meter and customer and utility application
    development will drive the cost of
    AMI implementation down dramatically
    and increase the realizable benefits to
    both the customer and the utility. It will
    also enable the level of application innovation
    required to deliver the benefits
    of AMI. AMI champions need to focus on
    computer systems and applications – and
    not on the replacement of electrical measuring
    devices – to be successful.

    It’s the smart thing to do!

    Fast Answers Through Embedded Business Intelligence

    Utilities have long been on a
    quest for better ways to handle
    and make sense of data. Few
    other industries need control over such
    huge data volumes simply to handle
    the everyday aspects of their business
    – locating assets, dealing with customers
    and their consumption, directing field
    crews, repairing outages and addressing
    ever-increasing requirements for
    efficiency, cost control and community
    contributions.

    It is not surprising, then, that utilities
    were prime candidates for the earliest
    forays into business intelligence.

    Initial, Partial Success

    In the 1980s, report writers dominated the
    scene. Though seemingly straightforward,
    they proved difficult to use. IT experts had
    to not only translate business requirements
    but also to know precisely what
    data was stored, where it was stored,
    how the value was populated and what
    caused it to change. Coding was an IT art
    of the highest order, requiring specialized
    knowledge of proprietary tools that were
    time-intensive and laborious. Thus, information
    users could not write or modify reports and had to wait months for IT to
    make even high-priority changes. That
    essentially guaranteed that any information
    gleaned would be out of date.

    Utility business intelligence made a
    major leap forward with the development
    of knowledge warehouses and data
    marts. These hubs of mined information
    from varying data sources were generally
    separated from the production environment
    so as not to risk the integrity of live
    data or hinder performance of production
    systems. They also permitted online
    analytical processing (OLAP) – fast, interactive
    and, above all, multidimensional
    information access that facilitates analysis
    via presentations like the Balanced
    Scorecard. Data mining technology took
    us a step further in helping users to find
    patterns in large amounts of data.

    As with report writing, however, data
    warehouse implementation generally
    required lengthy analysis, programming
    and setup – complex tasks with which few
    in-house IT experts could cope. Options
    and compromises were many and decisions
    hard to come by. Many companies
    became mired in “analysis paralysis,” and
    many projects never produced concrete results. As a consequence, most organizations
    attempting to implement data warehouses
    hired busloads of outside experts
    from the major consulting firms – for
    months or even years at a time.

    Even when experts implemented data
    warehouses, users still faced the same
    problem they had had with report writers:
    They could not make changes or
    ask today’s most relevant questions. So,
    few of these early projects succeeded in
    attaining the business decision improvements
    hoped for. Some got partway to
    their goal, if only for a short time. Many
    declared victory and then sank beneath
    the waves of an annual report’s footnotes.
    And data remained isolated within its own
    individual departmental silos.

    New Approaches

    In the 21st century, applications vendors
    have begun to build a new road to business
    intelligence. Its raw materials are:

    • New delivery techniques, especially
      Web-based portals and the technology
      that lets users easily customize them.
    • Pre-built extracts from common commercial
      packages such as ERP, CIS, etc.,
      that require less specialized expertise.
    • Aggregation, bit mapping and other
      OLAP optimization techniques built into
      standard database products, making
      them ubiquitous and easier to use.
    • Data warehousing techniques that
      support real-time or near-real-time
      data feeds, effectively supporting
      the analysis of rapidly changing
      operational data.
    • Standards that allow simple interaction
      between the OLAP results and business
      processes on the front end, closing the
      loop between measuring results and
      improving processes.
    • Perhaps most important, tools that
      bridge the gap between the business
      and IT departments, requiring less
      “translation” between roles and allowing
      each to work in their own specializations
      to get the job done.

    Results Out of the Box

    The result of this initiative is embedded
    business intelligence – solid, quick
    and easy analytical capability across an
    enterprise. This new approach, generally
    designed with a specific industry in
    mind, automatically collects, stores and
    analyzes data within a live application
    environment, hence the common term for
    it, “embedded BI.” It permits users – from
    the CEO down – to get the data they need
    presented in virtually any manner on a
    regular basis or in response to on-the-fly
    queries. It draws together information
    from different databases and presents
    results in easily accessible, graphical
    forms. As a result, managers and staff
    at any level can perform the updates,
    historical comparisons and cross-organizational
    comparisons that enhance
    real-time decision making. And, though
    out of the box and easy to upgrade, these
    products still have the flexibility to permit
    sophisticated customization where
    required.

    Hesitation

    Embedded BI is not a complete replacement
    for the large, complex BI projects
    custom-built from vendor-provided tools,
    which can:

    • Draw data from a larger variety of
      sources;
    • Respond to an organization’s specific
      needs; and
    • Use vocabulary and other approaches
      unique to the organization.

    But the cost of these customized solutions
    is very high – easily an order of
    magnitude higher than the typical cost
    of embedded approaches that are linked
    to a specific application. And, over time,
    even the simplest and most straightforward
    custom applications tend to become
    burdened with modifications to accommodate
    the demands of individual users.
    These changes may seem logical, particularly
    those that minimize the time users
    must work with the application before
    getting their results, but as they accumulate,
    they lower system performance
    and degrade overall processing. The outcome
    is frequently the BI “Franken-App”
    – complex solutions that become increasingly
    difficult to maintain and change
    over time.

    Why Choose Embedded BI?

    Embedded business intelligence is different.
    Linked to a specific application or
    set of applications, it provides such
    benefits as:

    • Significantly lower cost.
    • The ability to tailor applications to specific
      needs and to retain those changes
      over time – in other words, tailoring
      without loss of upgradability.
    • The ability to perform prototyping to
      provide near-term approximate answers
      while accelerating the ultimate solution
      design. With embedded BI, organizations
      do not have to spend weeks or
      months analyzing a specific problem
      before embarking on design. Embedded
      BI permits the setup of a quick
      prototype for testing, which can then
      be improved by going back and refining
      the ETL, the schema definitions, the KPI
      definitions, etc., until the new solution
      produces appropriate results.

    Other benefits include rapid implementation;
    vendor-provided updates, maintenance
    and training; and exposure to, and
    learning opportunities via, the vendor’s
    community of users and experts.

    Not all embedded approaches are
    created equal, however. Among the
    elements that increase value are:

    • A working framework from which
      to pull information and deliver value
      immediately, which will fast-track pertinent
      knowledge based on actual data.
      Look for solutions that are delivered
      with predefined data extracts and
      reporting structures.
    • A “starter kit” of common metrics and
      measures that are used throughout
      the industry. Utilities companies should
      look for metrics that focus on such
      concerns as outage duration, preventive
      maintenance, bill accuracy, queries
      resolved during an initial call from the
      customer, and so on.
    • Performance protection. While many
      business intelligence applications can
      operate comfortably within a production
      environment, that is not always the
      case. For very high volumes, it’s crucial
      to implement a solution that extracts
      data directly from the production operating
      system and holds it in a more efficient
      vehicle. This will maximize system
      performance and operational flow.
    • Flexibility. Business intelligence effectiveness
      depends on users’ ability to
      adapt information presentation to the
      way they work – not the other way
      around. Simple information access
      is not enough. Users must be able to
      filter and sort it in order to highlight
      the exact information they need when
      they need it. Predetermined analytics
      should provide not an end point but,
      instead, a launching pad from which
      users can create the performance indicators
      they need. Additionally, users will
      recognize problems and opportunities
      for improvement far more quickly when
      they can adapt graphical formats for
      fast and easy examination and analysis
      of evolving situations.
    • The ability to add and subtract business
      facts and other data quickly and
      easily. Users need to be able to see
      the consequences of, for instance, new
      regulations under consideration, a sudden
      population increase or decrease, or
      a rise or fall in the price of fuel or other
      supplies. In some cases, users may need
      to add an entire new source of data to
      permit effective response to new business
      imperatives; the best BI applications
      make that easy.
    • Provider expertise. The most effective
      solutions focus on the needs of a
      specific industry. The solution provider
      must have a broad knowledge of the
      industry’s business applications, regulatory
      environment and unique business
      challenges.
    • Appropriate sizing. An application that
      works with a single data source can
      address specific customer or organization
      needs. That will probably be inadequate,
      however, for executives who
      must address issues across multiple
      enterprisewide business processes.

    There is no denying that embedded business
    intelligence requires balancing and
    trade-offs. Not all of these solutions can
    draw information from multiple, complex
    databases, and when they can’t, missing
    information can degrade the decision-making
    process. A second contentious
    area is the extent to which an application
    accommodates unique organizational
    needs; accommodations that decrease
    user time and effort may also, in the long
    run, restrict application upgradability.

    Implementing Embedded BI

    Implementing an embedded business intelligence
    solution is vastly different from
    implementing customized BI. While custom
    BI solutions generally rest on extensive
    and extended analysis, out-of-the-box solutions,
    as explained above, rest on a prototyping
    process in which you begin with
    approximation and move iteratively closer
    to the ideal by redefining and redesigning.

    Successful prototyping means avoiding
    overanalysis. Instead it’s prudent to move
    relatively rapidly through initial steps that:

    • Define KPIs. Begin with those most
      important to organizational success and
      add refinements later.
    • Determine the reporting structures
      (star schemas) that support the KPIs.
    • Map data sources to the structures.
    • Determine scheduling. Extract frequency
      depends on how often information
      changes and the consequences of using
      outdated information. For tasks like
      product introductions, managers may be
      able to make sound decisions based on
      trends shown across several months of
      historic data, the most recent of which
      may be a week or two old. For tasks like
      informing managers of progress on a
      current outage resolution against the
      previous year’s trends, extracts every 15
      minutes may be too slow.
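
    As a minimal sketch of what the first three steps can produce,
    with the KPI, tables and extract schedules invented for
    illustration:

        # Hypothetical prototype of steps 1-3: one KPI, one fact table,
        # two dimensions and a source mapping with extract schedules.
        # Table, column and system names are invented for illustration.
        kpi = {
            "name": "avg_outage_duration_min",
            "definition": "SUM(outage_minutes) / COUNT(DISTINCT outage_id)",
        }

        star_schema = {
            "fact_outage": ["outage_id", "date_key", "feeder_key",
                            "outage_minutes", "customers_affected"],
            "dim_date":    ["date_key", "day", "month", "year"],
            "dim_feeder":  ["feeder_key", "substation", "region"],
        }

        source_mapping = {
            "fact_outage": ("OMS outage table", "extract every 15 minutes"),
            "dim_feeder":  ("GIS/asset registry", "extract nightly"),
        }

        print(kpi["name"], "<-", star_schema["fact_outage"])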

    These steps are all much easier with a vendor
    that provides considerable application
    power and guidance during the first implementation.
    Use the vendor’s experience
    and “conventional wisdom” to develop
    reasonable solutions to KPI measurement,
    then improve upon them with experience.
    Remember that the goal is getting a solution
    that starts producing answers almost
    immediately. While a vendor should also
    deliver long-term plans for upgrades, the
    initial solution has to fit today’s needs
    along with the configurability and flexibility
    to support tomorrow’s unanticipated
    conditions and changing goals.

    Does Embedded BI Work?

    Before they give embedded business
    intelligence a try, many organizations will
    have already experienced the failure or
    only partial success of large-scale, customized
    business intelligence projects.
    Additionally, because embedded solutions
    are almost always connected intimately
    to an application or set of applications,
    organizations are likely to encounter different
    vendors’ versions of embedded
    BI as they move across the IT structure.
    Therefore, it may be possible to move
    beyond traditional measures of application
    success like “how many people are
    using the product after a year” or “what
    is the level of user complaints.” Here are
    some metrics with which to compare various
    approaches to BI:

    • How long did it take to achieve usable
      results, i.e., were the KPIs defined in
      step 1 actually used or did the process
      take so long, the requirements changed
      before the solution was delivered?
    • What were the costs in time and money?
    • Were investments made in the application
      at the start usable throughout the
      project? In other words, could we add
      new data or change parameters along
      the way without throwing out everything
      accomplished to that point?
    • Was rollout quick and reasonably painless?
      Web-based solutions tend to score
      high in this category, since their familiarity
      minimizes user training. They also
      ease the IT burden, since there’s no
      software to roll out.

    The Case for Embedded BI

    Today’s challenges and tomorrow’s trials
    and opportunities necessitate a state of
    readiness and business agility that can
    only be supported by rapid, ready access
    to business-critical information. Managers
    and executives need as much help as
    technology can offer to condense volumes
    of complex, disparate data from multiple,
    mission-critical data sources into a cohesive
    knowledge base. From this base, they
    can identify risks, determine trends, more
    accurately forecast, and identify causeand-
    effect relationships that might not
    have otherwise been apparent.

    With this information, organizations can
    optimize operations, reduce the cost of
    servicing customers and identify opportunities
    to sell new products and services.
    This level of intelligence promotes a culture
    of continuous improvement, which
    enhances the ability to predict changing
    market conditions, and it offers executives
    the ability to highlight and respond to concrete
    opportunities for business optimization
    and an enhanced bottom line.

    Curbing Theft of Service Starts With Getting to Know Your Customer

    Theft of service has been discussed
    more in the boardroom in the last
    three years than it has in the past
    30. Why all the new interest?

    In short, the boardroom is responding
    to its perception of how some consumers
    may react to skyrocketing energy and
    resource prices. Unprecedented rates
    are pushing more consumers than ever
    before to resort to actions like stealing
    service. There is even word of an emerging
    market for professional “service tamperers,”
    who are paid to assist consumers
    in bypassing metered service.

    Skyrocketing prices aren’t the only factor.
    Sarbanes-Oxley, competition created
    through deregulation, and the public and
    state regulators’ demand for better corporate
    governance all contribute to making
    theft of service a 20th-floor topic.

    No territory is immune and no service
    is safe from theft. With a little knowledge,
    a little nerve or the right contractor, anyone
    can tap into electric, gas and water
    services. The majority of theft occurs in
    the residential sector, but the majority of
    all revenues lost, estimated between 1 and
    3 percent of total distribution revenues,
    occurs in the commercial account sector.
    A utility with $1 billion in revenues potentially
    loses between $10 million and $30
    million each year to theft, and more than
    two-thirds of that loss is within the relatively
    small commercial account sector.

    We will focus on commercial accounts
    to discuss how and where the most significant
    theft occurs and what leading North
    American utilities can do and are already
    doing to curb such illicit activity.

    Stealing Around the Meter

    The smarter thieves do not steal all of
    the service or tamper with the meter
    directly. This is an ill-conceived act with
    short-term payoff, since it’s likely to generate
    a zero-read event at the utility.

    Locking and sealing programs, solid-state
    meters and automated meter reading
    (AMR) tamper flags all help protect
    the meter but not the energy. A smart
    thief knows this and tampers around the
    meter instead.

    AMR tamper flags do not detect
    bypasses that divert energy around
    the meter, while locks and seals do not
    prohibit theft outside of the meter box.
    Analysis of theft cases among Detectent’s
    AMR-enabled customers between
    January and December 2006 revealed
    that most thefts occur without any corresponding
    tamper flag. Moreover, the
    majority of tamper flags triggered don’t
    even indicate a theft case or any other
    spurious activity. Most of them are the
    result of normal daily activities, such as
    contractor-induced outages and maintenance
    service calls, as well as external
    factors like vibrations from nearby
    machinery. The enormous volume of false
    alarms tends to minimize the impact of
    those few valid tamper flags.

    So the question is, how can a utility
    defend against theft of service when
    the thieves are getting smarter and the
    technology deployed does not address
    the major losses that occur outside of the
    meter box?

    Protect the Service, Not the Meter

    Service protection is not achieved through
    locks and seals, although a locking and
    sealing program is an important component
    of meter security.

    The service can only truly be protected
    by recognizing anomalous consumption.
    To do this, however, we first have to
    understand what consumers do with the
    energy and resources they receive. Then
    models can be developed to establish
    expected consumption patterns. Erroneous
    consumption patterns can then
    be detected as deviations from normal
    expected behavior. As a result, both theft
    around the meter and direct tampering
    can be exposed.

    Figure 1: An advanced industry model breaks down standard industry code into more specific groups
    Figure 2: Chain affiliation lends additional detail to the consumer model

    Knowing the Consumer

    Customer information systems (CIS) were
    designed to facilitate customer identification
    and business transactions such as
    billing. They weren’t designed to store
    the variety of data involved with getting
    to know the customer better. A great
    illustration of this is the classification of
    customers by standard industry code. A
    standard industry code of “full-service
    restaurant” provides no indication of how
    a restaurant uses energy and resources
    to fulfill the needs of its customers. With
    such simplistic coding, “full-service restaurant”
    can include everything from
    a large steak and seafood house to a
    small sandwich deli. Both use energy and
    resources in completely different ways to
    serve their customers.

    If, however, the data indicated that
    the full-service restaurant was actually
    a pizza parlor with an eat-in area, an
    extrapolation on the energy needs of that
    restaurant could be made. For instance,
    you would expect to find a certain number
    of pizza ovens to meet the demands of a
    certain dining-area capacity. You would
    expect seating areas to be air-conditioned
    in the warm months and heated in the
    cooler months. You would also expect
    refrigeration in scale with the number of
    potential patrons served.

    One approach to protecting service collects
    all of this information to create peer
    groups and predictive models.

    Consumer Modeling and Peer Grouping

    Basic consumer models take into consideration
    the standard industry code, but
    more advanced models expect that code
    to be broken down into greater detail.
    These models are designed to understand
    energy and resource consumption needs
    based on in-depth knowledge of expected
    usages. For the example shown in Figure
    1, the broad category of full-service restaurant
    is refined by cuisine, then by a
    sub-type and, finally, into groups depending
    on the environment where the food
    product is served. When you break data
    down into subcategories, you can determine
    that the consumption of energy for,
    as an example, a takeout pizza restaurant
    is significantly different than that of an
    eat-in pizza restaurant.

    The addition of chain affiliation can
    make these models even more precise
    in determining expected energy and
    resource usage, because most locations
    of the same chain will have standard
    equipment specifications. In Figure 2,
    expected consumption varies not only
    by the restaurant environment but also
    by the national chain that dictates the
    installed equipment specification.

    Applying the Logic to Other Business Types

    The restaurant example may seem like
    an obvious case for consumer modeling,
    but in reality, consumer modeling can be
    applied to all business types. Hotels and
    motels can be classified by number of
    rooms and leisure and business amenities
    provided. Gas stations can be classified by
    complimentary services such as car wash,
    shopping and restaurant facilities.

    Gathering the Data

    The obvious next question then is, if the
    data does not reside in the customer
    information system, then from where does
    all this information about the consumer
    come? It might surprise you to learn that
    the information is gathered from a huge
    variety of publicly available sources.

    Figure 3: Actual Consumption Patterns Compared With the Expected Norm

    The Internet offers access to vast
    repositories of information on commercial
    consumers. These are commercial
    entities that can create goodwill and
    benefit in other ways from having the
    public know about them. All we have to
    do is know how and where to obtain that
    information for our own purposes.

    There are numerous databases you can
    purchase that offer a wealth of information
    about commercial consumers, such
    as type of business, products and services
    offered; environment in which the
    products and services are offered; size
    of business; location; number of employees;
    hours of operation; sales volumes;
    and so forth. This information, once
    paired with data from the utility’s own
    customer information system, provides
    quite a clear picture of how commercial
    consumers utilize the energy and
    resources delivered to them.

    Establishing Peer Groups

    Gathering data on a single consumer is
    only valuable if there is a like set of data
    on similar consumers available for purposes
    of comparison. Therefore, peer
    groups must be established and similar
    data elements must be gathered for each
    account in a peer group.

    Peer groups can be developed for each
    service territory as well as nationally. Geographically,
    local peer groups may represent expected energy and resource usage
    more reliably, but national peer grouping
    offers a larger base with which to make
    comparisons, develop trends and establish
    expected norms. Figure 3 shows plots of
    consumption for various businesses of
    the same type against an expected norm
    calculated from the peer group’s data. The
    expected norm is illustrated by the dark
    magenta line and outliers are considered
    those that do not follow the same pattern.
    Actual monthly consumption is not a significant
    factor, as this particular grouping is
    not concerned with service capacity.
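
    A simplified sketch of this kind of comparison, in which each
    account’s monthly usage is normalized to a unit pattern (shape
    rather than magnitude) and measured against the peer norm; the
    accounts, values and threshold are invented:

        # Simplified peer-group comparison: normalize each account's twelve
        # monthly readings to a unit pattern (shape, not magnitude), average
        # the patterns into a peer norm, and flag accounts that deviate.
        # Accounts, values and the threshold are invented for illustration.
        peer_usage = {
            "acct_A": [900, 950, 1000, 1100, 1300, 1500, 1600, 1550, 1300, 1100, 950, 900],
            "acct_B": [450, 470, 500, 560, 640, 760, 800, 780, 650, 560, 480, 450],
            "acct_C": [800, 790, 780, 760, 500, 300, 280, 290, 480, 750, 790, 800],
        }

        def normalize(series):
            total = sum(series)
            return [x / total for x in series]

        patterns = {acct: normalize(s) for acct, s in peer_usage.items()}
        norm = [sum(p[m] for p in patterns.values()) / len(patterns) for m in range(12)]

        THRESHOLD = 0.25   # invented cutoff on total deviation from the norm
        for acct, pattern in patterns.items():
            deviation = sum(abs(p - n) for p, n in zip(pattern, norm))
            status = "OUTLIER" if deviation > THRESHOLD else "ok"
            print(f"{acct}: deviation from peer norm = {deviation:.2f} ({status})")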

    Eliminating False Alarms

    It’s possible to become overwhelmed with
    increases in false alarms as the volume
    of data being processed increases. This
    is especially true when comparing businesses
    based on expected similarities. Any
    number of things can change a business’s
    consumption pattern such that it appears
    anomalous – changes in ownership or
    management, a shift in operating hours,
    an incorrect business classification, multiple
    meters under different
    names, seasonal conditions,
    closures for remodeling, even
    the simple act of upgrading
    equipment. The key to
    successfully identifying
    anomalous consumption is to
    filter out these false alarms
    through a process of screening
    and analysis.

    Before a seemingly anomalous
    account is identified for
    investigation, it gets reviewed
    and verified for accuracy and
    any potential changes in how
    business is conducted. This
    process tends to catch most of the false
    alarms and eliminates false cases from
    investigation. Furthermore, it corrects the
    account’s peer grouping to ensure integrity
    in other comparisons.

    This process is illustrated in Figure 4.
    The confidence factor on the right side
    indicates the likelihood of a theft being
    discovered. The more thorough the
    screening and analysis, the higher the
    likelihood of success in the field.

    Other Sources of Data

    Figure 4: A screening and analysis process can pinpoint most false alarms

    Another significant data source is to look
    at other services delivered. Whether
    delivered by one utility or by separate
    utilities, multiple service data can add
    tremendous value to the analysis process.
    A great example would be a Laundromat.
    A Laundromat typically has gas dryers
    and electric washers. If we know the
    usage of either gas, electric or water, then
    consumption projections can be made for
    the other services too. It is very safe to
    extrapolate this way because most Laundromat
    customers will use an amount of
    water and electricity in direct proportion
    to the amount of gas used for drying. If
    one of the services is not metered properly,
    it will show up as anomalous based
    on its ratio to the other metered services.
    Even if all services had been tampered
    with, the ratio of stolen service is not
    likely to be consistent, so it would still
    appear as anomalous usage.
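
    A simplified sketch of such a ratio check, with the peer ratios,
    tolerance band and account data invented for illustration:

        # Simplified cross-service ratio check for a laundromat-type account.
        # Peer ratios, tolerance band and the account data are invented.
        PEER_RATIOS = {
            "elec_kwh_per_gas_therm": 0.55,
            "water_gal_per_gas_therm": 120.0,
        }
        TOLERANCE = 0.30   # +/- 30 percent band around the peer ratio

        account = {"gas_therms": 1200, "elec_kwh": 310, "water_gal": 145000}

        observed = {
            "elec_kwh_per_gas_therm": account["elec_kwh"] / account["gas_therms"],
            "water_gal_per_gas_therm": account["water_gal"] / account["gas_therms"],
        }

        for name, expected in PEER_RATIOS.items():
            low, high = expected * (1 - TOLERANCE), expected * (1 + TOLERANCE)
            status = "ok" if low <= observed[name] <= high else "ANOMALOUS"
            print(f"{name}: {observed[name]:.2f} vs {low:.2f}-{high:.2f} ({status})")
        # The low electric-to-gas ratio here would suggest the electric
        # service is not being fully metered.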

    These same energy use ratios can
    be developed and modeled for almost
    all business types, whether they’re
    restaurants, hotels, service stations, public
    facilities, etc. Energy use ratios have
    consistently identified significant theft
    cases that date back years and would otherwise
    have remained undetected.

    Summary

    Theft is not going away and is only likely
    to increase in the coming years. That’s the
    bad news. The good news is that there are
    effective tools for identifying anomalous
    usage patterns. Data gathered in different
    service areas can be used to build
    consumer models and peer groups. By
    integrating data from AMR tamper flags
    with those models, valid theft cases can be
    identified more accurately.

    Communication Technology Considerations for the Intelligent Utility Network

    Most industries have experienced
    significant transformations over
    the past few decades, which have
    resulted in improved business processes
    and a more efficient operating structure.
    These transformations, to a large extent,
    have been facilitated by applying appropriate
    technology improvements, in concert
    with changed business processes.

    The utility industry is just starting to
    take advantage of these transformational
    opportunities, which are, in turn, affected
    by several significant operational challenges,
    including:

    • An aging transmission & distribution
      (T&D) workforce, coupled with
      fewer workers entering the field
      and increased training costs and
      requirements;
    • An aging T&D physical infrastructure,
      making it more challenging to manage
      the distribution network in a safe and
      efficient manner;
    • Increased pressure to provide a reliable
      distribution network in support of
      today’s digital economy; and
    • Increased demand for products and
      services that allow customers to
      manage their energy use more efficiently,
      thus reducing future power
      requirements.

    Solution building blocks that are critical to
    the development of the intelligent utility
    network (IUN) include:

    • Business and process models;
    • Architecture design, including a common
      data model;
    • Advanced analytics;
    • Advanced communication network
      architectures, and embedded sensors
      and actuators;
    • Additional technical infrastructure
      elements as required; and
    • Industry partnerships, like IBM/Cisco.

    IBM’s focus in this effort is to put a sensor
    fabric in place across the utility value
    chain, from the data sources, to the
    applications that use the data, and then
    to business support and operations (see
    Figure 1). This model provides a utility
    with the flexibility and agility to respond
    to ever-changing requirements and integrate
    or upgrade new operational environments.
    In other words, the intelligent
    network unifies the utility’s equipment,
    systems, customers and employees. It
    enables on-demand access to information
    about customers, assets and the T&D grid,
    which is then used to continuously optimize
    operations and planning.

    Intelligent Utility Network Functions

    The IUN enables companies to manage
    operations across the entire enterprise,
    rather than in individual business units or
    departments. One overarching objective of
    the IUN is the deployment of various sensors
    along the grid to collect and analyze
    a wide variety of data in order to automate
    certain actions to the grid (such as reconfiguring
    the grid automatically). While
    there are many elements of the IUN, the
    following form the core design:

    • Feature 1: Enable Internet Protocol
      Communications

      The IUN is based on the conversion
      from analog to digital, providing greater
      quality and access to operating information.
      By enabling all devices on the
      network through IP (Internet protocol)
      communication, companies will be able
      to grow their networks quickly and take
      advantage of new innovations. Examples
      of these technologies include wireless,
      broadband over power line (BPL)
      and voice over IP (VoIP).
    • Feature 2: Open Technologies, Open
      Standards, Consistent Architecture

      With the IUN, the complexity of using
      devices is decreased by adoption of
      industry standards. Internal users access
      devices with technologies already familiar
      to them, thus lowering the cost of
      acquiring new talent for development
      and maintenance. A common architecture
      drives the implementation, incorporating
      industry standard data models,
      open technology communications, and
      adoption by business partners.

    • Feature 3: Enable/Support New
      Business Opportunities

      As new business models are created,
      energy companies need to explore new
      sources of revenue. By building flexibility
      into their network design, businesses
      will be able to expand quickly
      and accommodate new acquisitions or
      divestures. Examples sweeping the utility
      industry include IP telephony, broadband
      Internet access and security-monitoring
      services.
    • Feature 4: Consolidation Through
      Public and Private Networks

      To realize the promise of convergence,
      utilities need a clear strategy for moving
      from a “disconnected” business to
      one that has key processes integrated
      through collaborative portals. Today the
      convergence of voice, video and data is
      set to transform business relationships
      and collaborative strategies forever.
    • Feature 5: Security Based on
      Data/Applications

      Regardless of the current or future
      infrastructure, information access
      and security are critical to energy
      and utility companies. By migrating
      to the IUN, companies can leverage
      the latest technology advances for
      securing people and devices throughout
      the network.
    • Feature 6: Business Resilience of
      Critical Infrastructure

      Today’s dynamic business climate
      mandates a shift from the traditional,
      reactive disaster recovery methods
      to a more process-oriented, proactive
      approach to information availability and
      business protection.

    IUNs in Utilities

    In the utility industry the device, network
    and data domains reach throughout many
    established networks, including corporate
    enterprise wide area network (WAN) and
    local area network (LAN), land mobile
    radio, public cellular, supervisory control
    and data acquisition (SCADA) and microwave.
    To put it in historical context, these
    networks enabled discrete capabilities
    prior to the existence of many common
    carrier services. Within utilities’ operating
    environments, land mobile radio predates
    cellular services; and microwave existed
    prior to T-1, frame relay and Internet service
    providers. When the current utility
    networks are viewed holistically, in present-
    day terms with present-day capabilities,
    significant opportunities for reduced
    costs and enhanced capabilities emerge.

    At the physical infrastructure level,
    the IUN’s value is in facilitating the
    transition to a common IP-based environment
    in which each element of the network
    is assessed and optimized according
    to the best, lowest-cost IP service available.
    The IUN is not a panacea of perfect
    solutions but rather a target that establishes
    connections from the data source
    to the decision maker and back. This
    provides the functionality for tools like
    end-to-end security, resource management,
    data mining, common business
    views, and adaptive computing and telecommunications.

    Building the Right Foundation

    The IUN infrastructure connects and
    integrates information from substations,
    IEDs/assets and smart meters via an
    IP-enabled network. The network infrastructure
    is typically divided into areas
    based on physical, logical or wireless connectivity.
    The basic building block of any
    network is the LAN, which can be as small
    as a switch/router in a one-room building
    or large enough to support multiple buildings
    in a campus environment with several
    switches and/or routers. Historically LANs
    were based on layer 2 of the open systems
    interconnection (OSI) model. As more network
    devices and applications began to support
    IP, LANs migrated to layer 3. Although
    routers can be found in LANs, the predominant
    devices are layer 2 or layer 2/3
    switches. The LAN normally provides
    the highest speed to the end user or
    device, typically 100 megabits per second
    (Mbps) to 10 gigabits per second (Gbps).
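
    To make the layer-2/layer-3 distinction concrete, here is a minimal sketch in Python (standard library only; the addresses and the /24 prefix are hypothetical) that checks whether two devices share a subnet and can therefore talk directly across the LAN, or whether traffic must be routed at layer 3.

```python
# Minimal sketch: does traffic between two devices stay on the local
# layer-2 segment, or must it be routed at layer 3?
# Addresses and the /24 prefix are hypothetical examples.
import ipaddress

def same_layer2_segment(host_a: str, host_b: str, prefix: int = 24) -> bool:
    """Return True if both hosts fall inside the same IP subnet."""
    net_a = ipaddress.ip_network(f"{host_a}/{prefix}", strict=False)
    net_b = ipaddress.ip_network(f"{host_b}/{prefix}", strict=False)
    return net_a == net_b

# A layer-2 switch can forward between these two devices directly ...
print(same_layer2_segment("10.10.1.20", "10.10.1.45"))   # True
# ... but reaching a device on another subnet requires a layer-3 router.
print(same_layer2_segment("10.10.1.20", "10.10.2.45"))   # False
```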

    A WAN spans multiple cities, states
    or countries and is used to connect the
    LANs. Like LANs, WANs have also been
    migrating from a layer-2 environment to a
    layer-3 network that supports IP. Although
    WAN bandwidth can reach 10
    Gbps, that capacity is normally shared across
    multiple locations simultaneously.

    The metropolitan area network (MAN) is
    typically between a LAN and WAN in size.
    It may span a city, county, state or several
    states, depending on the requirements.
    MANs usually take advantage of optical
    technologies like synchronous optical networking
    (SONET) or dense wavelength division
    multiplexing (DWDM); some MANs may even
    use Ethernet via fiber optic cables or BPL.
    Like the WAN, the MAN’s bandwidth can reach 10
    Gbps and is shared across multiple locations.

    In most cases, LANs, MANs and WANs
    are created by connecting network
    devices physically. Although microwave
    and satellite communications have been
    around for a long time, newer and less
    expensive wireless technologies, like wireless
    fidelity (WiFi), have become increasingly
    popular. This wireless technology
    includes IEEE 802.11a, b and g and supports
    speeds up to 54 Mbps. In addition,
    these frequencies do not require a license.
    For those applications that do not require
    higher speeds, and where it is not cost-effective
    to pull cable or lay fiber, wireless
    is a very good alternative. Keep in mind
    that wireless is not meant to replace the
    traditional physical connectivity of a LAN,
    MAN or WAN; rather, it is designed to augment
    and expand the existing infrastructure,
    providing additional options such
    as ubiquitous access, two-way communications,
    data capture and zero-touch service.

    In the past, there were separate communications
    infrastructures for voice,
    call centers, video conferencing, video
    surveillance, security, SCADA and building
    management systems for air-conditioning,
    heat, ventilation and fire. Substations are
    a good example of these separate infrastructures,
    with analog lines for phones,
    RS-232 connectivity for SCADA (or remote
    terminal units), coaxial cables for video
    surveillance, fiber for LAN connectivity
    and card systems for physical security.
    More and more of these systems are being
    converged onto one network infrastructure
    in an effort to reduce implementation
    and support costs, while driving significant
    increases in productivity by providing better-
    quality information more quickly.

    Security is a common requirement in all
    the areas and technologies. Although most
    companies are 75 to 95 percent vulnerable
    to external sources via the Internet, these
    same companies are almost 100 percent
    vulnerable to internal exploitation. This
    vulnerability stems from the inability to
    control the desktop: antivirus
    software is signature-based and
    cannot address zero-day attacks; physical
    connections to the internal network
    are not controlled; and vendors or contractors
    with non-company PCs can and do
    connect to the internal network. Nowadays
    firewalls are being deployed internally
    throughout the company to provide
    security zones. One of these important
    zones would be in front of the data centers
    – and even within data centers – to protect
    critical applications and data. Encryption
    is another security technology that
    used to be deployed across the Internet
    exclusively but is now being used within
    companies to provide an additional level of
    security and data protection.
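
    The security-zone idea can be sketched very simply. The following Python fragment is an illustration only (the zone names, services and rules are hypothetical, not any particular firewall product’s syntax), but it shows the default-deny posture that internal security zones are meant to enforce.

```python
# Minimal sketch of internal security zones with a default-deny policy.
# Zone names, services and allowed flows are hypothetical examples.
ALLOWED_FLOWS = {
    ("corporate_lan", "data_center"): {"https"},   # users reach applications
    ("app_tier", "database_tier"): {"sql"},        # inside the data center
    ("scada_zone", "data_center"): {"historian"},  # one-way operational data
}

def is_permitted(src_zone: str, dst_zone: str, service: str) -> bool:
    """Permit a flow only if the zone pair and service are explicitly allowed."""
    return service in ALLOWED_FLOWS.get((src_zone, dst_zone), set())

print(is_permitted("corporate_lan", "data_center", "https"))  # True
print(is_permitted("corporate_lan", "scada_zone", "https"))   # False (default deny)
```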

    Leveraging the Foundation: The IUN in Action

    Truck of the Future
    Another way to leverage the IUN for
    increased efficiency and productivity is to
    extend it to utility trucks. These high-tech
    vehicles are sometimes called the “Truck
    of the Future.” Obviously, the trucks use
    wireless technologies to send and receive
    information instantly. Work orders and
    tickets are opened, updated and closed in
    real time, regardless of the truck’s location,
    eliminating the need for truck crews
    to return to the office for printouts.

    One wireless technology used to enable
    the Truck of the Future is the Cisco wireless
    mesh network, which can span a city
    or metropolitan area (see Figure 2). The
    truck thus becomes a rolling wireless hot
    spot that is never out of touch. This also
    allows the truck to “heal” a part of the
    wireless network if it goes down. When the
    truck is dispatched to a work site where
    part of the wireless network is down, the
    truck itself (with its built-in wireless capabilities)
    can bridge the wireless network
    until the crew completes its work.

    Cisco’s wireless technology provides
    RFID capabilities, allowing tracking of the
    truck, personnel and assets. With RFID,
    assets like special tools can be identified/
    located anywhere in the city or metropolitan
    area to minimize loss or duplication.

    Like their wired counterparts, wireless
    networks are also being converged. The
    general trend is to integrate voice, video
    and data into unified communications
    (UC). This allows centralized experts to see
    equipment problems truck crews encounter
    at the site and provide real-time advice.
    By extending collaboration applications to
    the trucks, utilities save time, money, and,
    in some cases, lives.

    Automating the Work-Order Process
    In addition to receiving work orders in real
    time, it is also possible to automate the
    entire work-order process. Mobile workers
    can use voice recognition software, real-time
    pictures and video clips when they
    submit or escalate the work-order process,
    receive approval for repairs, order
    parts or request additional resources.
    With these capabilities, repairs are performed
    more quickly and efficiently.

    Securing Distant Assets
    Another way to leverage the IUN, and
    another benefit of UC, is the ability to
    provide security via video surveillance. In
    many cases, the video is captured via wireless
    surveillance cameras and integrated
    into enterprise security software systems.
    This allows energy and utility companies
    to provide enhanced protection of key
    assets (like substations) that are located
    remotely (see Figure 3). It is also possible
    to integrate card reader systems (e.g., for
    magnetic door locks) and RFID into security systems to provide a more complete
    view of security, personnel and assets.

    While leveraging this same utility network,
    it is also common to provide network
    security with an integrated firewall, virtual
    private network (VPN) and intrusion
    prevention. VPNs can be created with IP
    security (IPSec), secure sockets layer (SSL)
    or multiprotocol label switching (MPLS).
    Although all three of these technologies
    enable VPN via some form of tunneling,
    only IPSec and SSL provide encryption. If
    encryption is needed across an MPLS environment,
    IPSec is typically added.
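
    As a rough illustration of the encryption an SSL-based VPN supplies, the sketch below uses Python’s standard ssl module to wrap an ordinary TCP connection in an encrypted TLS session. The gateway host name is a placeholder; a production VPN would be terminated by dedicated gateway hardware or software rather than application code.

```python
# Minimal sketch: wrapping a TCP connection in TLS so data is encrypted in
# transit, analogous to the protection an SSL VPN tunnel provides.
# "vpn-gateway.example.com" is a placeholder host name.
import socket
import ssl

context = ssl.create_default_context()   # validates the gateway's certificate

with socket.create_connection(("vpn-gateway.example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock,
                             server_hostname="vpn-gateway.example.com") as tls:
        print("Negotiated protocol:", tls.version())    # e.g., TLSv1.3
        tls.sendall(b"operational data travels here, encrypted end to end")
```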

    Wireless Smart Meters
    Another way to leverage the IUN is to
    extend it to electric, gas and water meters,
    as shown in Figure 4. Wireless meter reading
    has existed for many years, but technology
    has introduced some valuable new
    capabilities, such as two-way communications,
    continuous connectivity for short
    interval (15-minute) reads, outage event
    notification, remote connect/disconnect
    and real-time price signals to customers.
    Automated meter reading also improves
    accuracy; reduces head count and turnover
    rate; and delivers overall cost savings
    and increased efficiency and productivity.
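
    To make the short-interval capability concrete, the sketch below shows one way a 15-minute interval read and an outage event might be represented in software before being carried over the IUN. The field names are illustrative only and do not follow any particular metering standard.

```python
# Illustrative data structures for 15-minute interval reads and outage events.
# Field names are hypothetical and do not follow any particular AMI standard.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class IntervalRead:
    meter_id: str
    interval_end: datetime
    kwh: float                      # energy used in the 15-minute interval

@dataclass
class OutageEvent:
    meter_id: str
    detected_at: datetime
    restored_at: Optional[datetime] = None   # None while the outage is ongoing

# Four consecutive 15-minute reads from one (hypothetical) meter.
start = datetime(2008, 1, 1, 0, 0)
reads = [IntervalRead("MTR-0001", start + timedelta(minutes=15 * (i + 1)), 0.5)
         for i in range(4)]
print(len(reads), "reads;", sum(r.kwh for r in reads), "kWh in the hour")
```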

    Summary

    The intelligence of a utility network hinges
    on the quality of its information and its
    ability to access and integrate that information
    anywhere, anytime. Such a network must be built
    upon the right infrastructure, along with
    effective enterprise integration and business
    applications for maintaining intelligence
    both on the grid and among the
    service functions designed to support it.
    The benefits of establishing such a
    network are manifold. By unifying a utility’s
    equipment, systems, customers and
    employees, the IUN enables on demand
    access to information that the utility can
    use to optimize operations and planning.
    In the face of so many operational challenges
    and an ever-changing environment,
    the utility industry must apply
    these types of technology solutions for
    its ongoing success.

    Additional content was contributed by
    Ron Aberman and Jeffrey S. Katz, IBM.

    The Ingredients That Go Into Spam

    “Never watch sausage being made,” folks say, lest you find the process so unappetizing that you’d never eat it again. Regardless of how you feel about Spam®, the venerable luncheon meat, all search marketers must understand the ingredients that go into search spam.

    In our last column, we explored the dangers of spam, which include bad publicity and getting banned from the search engines. We also looked at a spam technique called cloaking, in which spammers feed a different page to the search spider than what they show to real people.

    This time around, let’s look at stupid content tricks. The goal isn’t to teach you how to use spam techniques, but rather to help you spot them on your site (oh no!) or on your competitors’ (so you can report them). Content spammers generally employ two kinds of tricks: page stuffing and doorway pages. Let’s look at each one in turn.

    Page Stuffing

    Content spammers treat their Web pages like a Thanksgiving turkey. They stuff as much extra content into each page as possible, hoping they’ll include something that search engines like. Let’s look at the three major types of content spamming tricks:

    Hidden text

    Don’t use tricky techniques to show the search spider text that is not seen when a reader looks at your page. In the old days (two years ago), content spammers tried displaying text with the same font color as the background color. Today the trendy spammer uses style sheets to write keywords on the page that are then overlaid by graphics or other page elements. Whatever the technique, if the search spider sees your words but people never do, that’s spam. The only exception to that rule is HTML comments, which are ignored by both the spider and the browser.

    Duplicate tags

    In times past, the use of multiple title tags (and other meta tags) was rumored to boost rankings. Although few search engines fall for that trick nowadays, spammers have adjusted. The same style sheet approach that can hide text can also overlay text on top of itself, so it is shown once on the screen but listed multiple times in the HTML file, adding emphasis for the repeated keywords.

    Keyword stuffing

    Also known as keyword loading, this technique is really just an overuse of sound content optimization practices. Do emphasize your target keywords on your search landing pages, but don’t overuse them. Dumping out-of-context keywords into an <img> tag’s alternate text attribute, or into <noscript> or <noframes> tags, is a variation of this same unethical technique.
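
    A quick way to spot keyword stuffing on your own pages is to measure how often a keyword appears relative to the total word count. The sketch below is a rough illustration; the 2 percent threshold is an arbitrary example, not a published search engine rule.

```python
# Minimal sketch: flag pages whose keyword density looks like stuffing.
# The 2 percent threshold is an arbitrary example, not a search engine rule.
import re

def keyword_density(text: str, keyword: str) -> float:
    words = re.findall(r"[a-z0-9']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for word in words if word == keyword.lower())
    return hits / len(words)

page_copy = "cheap meters cheap meters cheap meters buy cheap meters today"
density = keyword_density(page_copy, "cheap")
print(f"keyword density = {density:.0%}")   # 40% - far beyond natural writing
if density > 0.02:
    print("Possible keyword stuffing - review this page.")
```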

    Search engines have gotten much better at detecting page stuffing in recent years, but the cat-and-mouse game continues. Each year, spammers develop new content tricks and search engines try to catch them.

    Some extremely clever and hardworking people really can fool the search engines with advanced versions of these tricks. Most of the time, however, spam techniques are like stock tips: Once you hear the tip, it is probably too late; the stock price has already gone up and the search engines are already implementing countermeasures.

    What should you do instead of page stuffing? Write your pages for your readers. Yes, use the popular keywords on your pages, but don’t repeat them endlessly like mindless drivel. Write engaging and informative pages that use the right keywords and you’ll attract the search engines. Moreover, when a reader gets to the page, your copy will persuade them to take the next step and buy something.

    Doorway Pages

    A few years ago, doorway pages were all the rage. Every search marketing “expert” was explaining how to create pages whose sole purpose is to appeal to search engines. The idea was that searchers came from the search engine to your site through a “doorway.” Some called them entry pages, others gateway pages, but the idea was the same. If your page exists only to get search rankings, it’s probably a doorway page.

    In a sense, doorway pages are doors that only open “in” because they are not part of the mainstream navigation of your website. Doorway pages link to other pages within your website, but none of your other pages link to them.
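
    Because doorway pages receive no links from the rest of the site, you can often spot candidates by building a simple link graph of your own pages. The sketch below is a rough illustration with made-up page names.

```python
# Minimal sketch: flag pages that no other page on the site links to,
# a classic symptom of doorway pages. Page names are hypothetical.
site_links = {
    "/index.html":         ["/products.html", "/about.html"],
    "/products.html":      ["/index.html", "/about.html"],
    "/about.html":         ["/index.html"],
    "/cheap-widgets.html": ["/products.html"],   # links in, nothing links to it
}

linked_to = {target for targets in site_links.values() for target in targets}
suspects = [page for page in site_links if page not in linked_to]
print("Possible doorway pages:", suspects)   # ['/cheap-widgets.html']
```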

    Spammers use various techniques to get high search rankings for doorway pages, such as cloaking (which we discussed in our last column), page stuffing, and link spam (which we’ll tackle in our next column). Search engines have tightened up their detection mechanisms to avoid high rankings for doorway pages, but a smart spammer can still slip them through.

    What should you do instead of doorway pages? Create search landing pages that are optimized for both search engines and people. Like doorway pages, search landing pages are designed to be the first page a searcher sees on your site when coming from a search engine. Unlike doorway pages, search landing pages are legitimate pages intrinsic to your navigation that are linked both to and from many other pages on your site. In fact, they are designed for people first and for search engines second.

    Some paid search landing pages can be legitimately designed to be closer to doorway pages. Because you may want to target many more keywords for paid search than you can optimize for organic search, you can create paid-placement landing pages that are not part of the mainline site navigation – with links leading into the site only. The difference between these pages and doorway pages is that they are not being used for organic search at all. (In fact, you should use a robots tag or robots.txt file to block them from organic search.) Because you are not fooling the organic engines with these pages, they are not spam.
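
    If you do block paid-placement landing pages this way, it is worth verifying that the rules actually work. The sketch below uses Python’s standard urllib.robotparser to confirm that a (placeholder) landing page path is disallowed for crawlers while the rest of the site remains crawlable.

```python
# Minimal sketch: verify that robots.txt actually blocks the paid landing
# pages from crawlers. Paths are placeholder examples.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow: /ppc-landing/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

print(parser.can_fetch("*", "/ppc-landing/blue-widgets.html"))  # False - blocked
print(parser.can_fetch("*", "/products/blue-widgets.html"))     # True - crawlable
```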

    For any pages that you want to optimize for organic search, just make sure they are heavily linked into the main navigation path of your site. That will ensure that the search engines treat them as landing pages rather than doorway pages.

    Comedian Buddy Hackett joked that his mother’s menu consisted of two choices: Take it or leave it. The search engines’ terms of service (their rules for you to follow) are similar. Search engines decide which techniques are spam and there’s no higher court for an appeal.

    Those who engage in content spam run a grave risk of having their sites banned by the search engines. So don’t be reckless. Stick to writing for readers and you won’t go wrong.

    That’s it for content spam. In the last part of the three-part series, you’ll bone up on link spam, so that you’ll recognize the tricky link techniques that might fool the search engines.

    Mike Moran is an IBM Distinguished Engineer and product manager for IBM’s OmniFind search product. His books (Search Engine Marketing, Inc. and Do It Wrong Quickly) and his Biznology blog are found at MikeMoran.com.