Turning Information Into Power

Around the world, utilities are under pressure. Citizens demand energy and water that don’t undermine environmental quality. Regulators seek action on smart grids and smart metering initiatives that add intelligence to infrastructure. Customers seek choice and convenience – but without additional costs.

Around the globe, utilities are re-examining every aspect of their business.

Oracle can help. We offer utility experts, mission-critical software applications, a rock-solid operational software suite, and world-leading middleware and technology that can help address these challenges. The result: flexible, innovative solutions that increase efficiency, improve stakeholder satisfaction, futureproof your organization – and turn information into power.

Utilities can begin with one best-of-breed solution that addresses a specific pain point. Alternatively, you can implement several pre-integrated applications to ease the development and administration of cross-departmental business processes, or standardize on our complete applications and technology footprint to increase accountability and reduce the resources spent on managing vendor relationships.

Oracle Is A Leader In Utilities: 20 of the Top 20 Global Utilities Get Results With Oracle

Oracle provides utilities with the world’s most complete set of software choices. We help you address emerging customer needs, speed delivery of utility-specific services, increase administrative efficiency, and turn business data into business intelligence.

Oracle Utilities offers the world’s most complete suite of end-to-end information technology solutions for the gas, water, and electric utilities that underpin communities around the world. Our revolutionary approach to providing utilities with the applications and expertise they need brings together:

  • Oracle Utilities solutions, utility-specific revenue and operations management applications:
    • Customer Care and Billing
    • Mobile Workforce Management
    • Network Management System
    • Work and Asset Management
    • Meter Data Management (Standard and Enterprise Editions)
    • Load Analysis
    • Load Profiling and Settlement
    • Portfolio Management
    • Quotation Management
    • Business Intelligence
  • Oracle’s ERP, database and infrastructure software:
    • Oracle E-Business Suite and other ERP applications
    • TimesTen for real-time data management
    • Data hubs for customer and product master data management
    • Analytics that provide insight and customer intelligence
    • ContentDB, SpatialDB and RecordsDB for content management
    • Secure Enterprise Search for enterprise-wide search needs
  • Siebel CRM for larger competitive utilities’ call centers, customer order management, specialized contacts and strategic sales:
    • Comprehensive transactional, analytical and engagement CRM capabilities
    • Tailored industry solutions
    • Role-based customer intelligence and pre-built
  • Oracle’s AutoVue Enterprise Visualization Solutions:
    • Make business and technical documents easily accessible by all enterprise users
    • Expedite document reviews with built-in digital annotations and markups
    • Boost the value of your enterprise system with integrated Enterprise Visualization
  • Oracle’s Primavera Solutions:
    • Effectively manage and control the most complex projects and project portfolios
    • Deliver projects across generation, transmission and distribution, and new clean-energy ventures
    • Optimize a diminishing but highly skilled workforce

Stand-alone, each of these products meets utilities’ unique customer and service needs. Together, they enable multi-departmental business processes. The result is an unparalleled set of technologies that address utilities’ most pressing current and emerging issues.

The Vision

Cross-organizational business processes and best practices are key to addressing today’s complex challenges. Oracle Utilities provides a path by which utilities can:

  • Address the "green agenda:"
    • Help reduce pollution
    • Increase efficiency
    • Complete software suite to enable the smart grid
  • Advance customer care with:
    • Real-time 360-degree views of customer information
    • Tools to help customers save time and money
    • The ability to introduce or retire products and services quickly in response to emerging customer needs
  • Enhance revenue and operations management:
    • Avoid revenue leakage across end-to-end transactions
    • Increase the visibility and auditability of key business processes
    • Manage assets strategically
    • Bill for services and collect revenue cost-effectively
    • Increase field crew and network efficiency
    • Track and improve performance against goals
    • Achieve competitive advantage with a leading-edge infrastructure that helps utilities respond quickly to change
  • Reduce total cost of ownership through access to a single global vendor with:
    • Proven best-in-class utility management solutions
    • Comprehensive, world-class capabilities in applications and technology infrastructure
    • A global 24/7 distribution and support network with 7,000 service personnel
    • Over 14,000 software developers
    • Over 19,000 partners

Strategic Technology For Every Utility

Only Oracle powers the information-driven enterprise by offering a complete, integrated solution for every segment of the utilities industry – from generation and transmission to distribution and retail services. And when you run Oracle applications on Oracle technology, you speed implementation, optimize performance, and maximize ROI.

When it comes to handling innovations like daily or interval meter reading; installing, maintaining, and replacing plant and linear assets; providing accurate bills; supporting your contact center and more, Oracle Utilities is the solution of choice. Utilities succeed with Oracle. Oracle helps electric, gas, water and waste management utilities meet today’s imperatives:

  • Help customers conserve energy and reduce carbon footprints
  • Keep energy affordable
  • Strengthen and secure communities’ economic foundation

Meeting the Challenges of the Future, Today

Utilities today need a suite of software applications and technology to serve as a robust springboard from which to meet the challenges of the future.

Oracle offers that suite.

Oracle Utilities solutions enable you to meet tomorrow’s customer needs while addressing the varying concerns of financial stakeholders, employees, communities, and governments. We work with you to address emerging issues and changing business conditions. We help you to evolve to take advantage of new technology directions and to incorporate innovation into ongoing activity.

Partnering with Oracle helps you to futureproof your utility.

Managing the Plant Data Lifecycle

Intelligent Plant Lifecycle Management (iPLM) is the process of managing a generation facility’s data and information throughout its lifetime – from initial design through to decommissioning. This paper will look at results from the application of this process in other industries such as shipbuilding, and show how those results are directly applicable to the design, construction, operation and maintenance of complex power generation facilities, specifically nuclear and clean coal plants.

In essence, iPLM can unlock substantial business value by shortening plant development times, and efficiently finding, reusing and changing plant data. It also enables an integrated and transparent collaborative environment to manage business processes.

Recent and substantial global focus on greenhouse gas emissions, coupled with rising and volatile fossil fuel prices, rapid economic growth in nuclear-friendly Asian countries, and energy security concerns, is driving a worldwide resurgence in commercial nuclear power interest.

The power generation industry is undergoing a global transformation that is putting pressure on traditional methods of operation, and opening the door to substantial innovation. Due to factors such as the transition to a carbon-constrained world, which greatly affects a generation company’s portfolio mix decisions, escalating constraints in the global supply chain for raw materials and key plant components, and fuel price volatility and security-of-supply concerns, generation companies must make substantial investments in an environment of increasing uncertainty.

In particular, there is a renewed interest globally in the development of new nuclear power plants. Plants continue to be built in parts of Asia and Central Europe, while a resurgence of interest is seen in North America and Europe. Combined with the developing interest in building clean coal facilities, the power generation industry is facing a large number of very complex development projects.

However, a key constraint being felt worldwide is a severe and increasing shortage of qualified technical personnel to design, build and operate new generation facilities. Additionally, as most of the world’s existing nuclear fleet reaches the end of its originally designed life span, relicensing of these nuclear plants to operate another 10, 20, or even 30 years is taking place globally.

Sowing Plant Information

iPLM can be thought of as lifecycle management of information and data about the plant assets (see Figure 1). It also includes the use of this information over the physical plant’s complete lifecycle to minimize project and operational risk, and optimize plant performance.

This information includes design specifications, construction plans, component and system operating instructions, real-time and archived operating data, as well as other information sources and repositories. Traditionally, it has been difficult to manage all of this structured and unstructured data in a consistent manner across the plant lifecycle to create a single version of the truth.

In addition, a traditional barrier has existed between the engineering and construction phases, and the operations and maintenance phases (see Figure 2). So even if the technical issues of interconnectivity and data/information management are resolved via an iPLM solution, it is still imperative to change the business processes associated with these domains to take full advantage.

Benefits

iPLM combines the benefits of a fully integrated PLM environment with a connected information repository and a flow of information into operational functions, including enterprise asset management (EAM) systems. Specific iPLM benefits are:

  • Ability to accurately assess initial requirements before committing to capital equipment orders;
  • Efficient balance of owner requirements with best practices and regulatory compliance;
  • Performance of design work and simulation as early as possible to ensure the plant can be built within schedule and budget;
  • Better project execution with real-time information that is updated automatically through links to business processes, tasks, documents, deliverables and other data sources;
  • Design and engineering of multi-disciplinary components – from structural to electrical and fluid systems – to ensure the plant is built right the first time;
  • Ability to virtually plan how plants and structures will be constructed to minimize costly rework;
  • Optimization of operations and maintenance processes to reduce downtime and deliver long-term profits to the owners;
  • Ensuring compliance with regulatory and safety standards;
  • Maximizing design and knowledge reuse from one successful project to another;
  • Managing complexity, including sophisticated plant systems and the interdependent work of engineering consultants, suppliers and the construction sites;
  • Visibility of evolving design and changing requirements to all stakeholders during new or retrofitting projects; and
  • Providing owners and operators a primary repository of all plant information and the processes that govern it throughout the lifecycle.

Benefits accrue at different times in the plant lifecycle, and to different stakeholders. They also depend heavily on the consistent and dedicated implementation of basic iPLM solution tenets.

Value Proposition

PLM solutions enable clients to optimize the creation and management of complex information assets over a project’s complete lifecycle. Shipbuilding PLM, in particular, offers an example similar to the commercial nuclear energy generation ecosystem. Defense applications, such as nuclear destroyer and aircraft carrier platform developments, are particularly good examples.

A key aspect of the iPLM value proposition is the seamless integration of data and information throughout the design, build, operate and maintain processes for industrial plants. The iPLM concept is well accepted by the commercial nuclear ecosystem. Engineering companies, utilities and regulators understand that information/data transparency, information lifecycle management and better communication throughout the ecosystem are necessary to build timely, cost-effective, safe and publicly accepted nuclear power plants.

iPLM leverages capabilities in PLM, EAM and Electronic Content Management (ECM), combined with data management/integration, information lifecycle management, business process transformation and integration with other nuclear functional applications through a Service Oriented Architecture (SOA)-based platform. iPLM can also provide a foundation on which to drive high-performance computing into commercial nuclear operations, since simulation requires consistent, valid, accessible data sets to be effective.

A hallmark of the iPLM vision is that it is an integrated solution in which information related to the nuclear power plant flows seamlessly across a complete and lengthy lifecycle. There are a number of related systems with which an iPLM solution must integrate. Therefore, adherence to industry-standard interoperability and data models is necessary for a robust iPLM solution. An example of an appropriate data model standard is ISO 15926, which was developed to facilitate data interoperability.

Combining EAM and PLM

Incorporating EAM with PLM is an example of one of the key integrations created by an iPLM solution, and it provides several benefits. First, it provides the basis for a cradle-to-grave data and work process repository for all information applicable to a new nuclear power plant. A single version of the truth becomes available early in the project design, and remains applicable in the construction, start-up and test, and turnover phases of the project.

Second, with the advent of single-step licensing in many parts of the world (consider the COLA, or combined Construction and Operating License Application, in the U.S.), licensing risk is considerably reduced by consistent maintenance of plant information. Demonstrating that the plant being started up is the same plant that was designed and licensed becomes more straightforward and transparent.

Third, using an EAM system during construction, and incrementally incorporating the deep functionality necessary for EAM in the plant operations, can facilitate and shorten the plant transfer period from the designers and constructors to the owners and operators.

Finally, the time and cost to build a new plant is significant, and delay in connecting the plant to the grid for the safe generation of megawatts can easily cost millions of dollars. The formidable challenges of nuclear construction, however, may be offset by an SOA-based integrated information system, replacing the traditional unique and custom-designed applications.

To help address these challenges, the power generation industry ecosystem – including utilities, engineering companies, reactor and plant designers, and regulators – can benefit by looking at methodologies and results from other industries that have continued to design, build, operate and maintain highly complex systems over the last 10 to 20 years.

Here we examine what the shipbuilding industry has done, the results it achieved, and where it is going.

Experiences In Shipbuilding

The shipbuilding industry has many similarities to the development of a new nuclear or clean coal plant. Both are very complex, long-lifecycle assets (35 to 70 years) that require precise and accurate design, construction, operation and maintenance to both fulfill their missions and operate safely over their lifetimes. In addition, the respective timeframes and costs of designing and building these assets (five to 10 years and $5 billion to $10 billion) create daunting challenges from a project management and control point of view.

An example of a successful implementation of an iPLM-like solution in the shipbuilding industry is a project completed for Northrop Grumman’s development of the next generation of U.S. surface combat ships, a four-year, $2.9 billion effort. This was a highly complex, collaborative project completed by IBM and Dassault Systemes to design and construct a new fleet of ships with a keen focus on supporting efficient production, operation and maintenance of the platform over its expected lifecycle.

A key consideration in designing, constructing and operating modern ships is the increasing complexity of the assets, including advanced electronics, sensors and communications. These additional systems and requirements greatly multiply the number of simultaneous constraints that must be managed within the design, considered during construction, and maintained and managed during operations. This not only adds system complexity, but also increases the importance of effective collaboration, as many different companies and stakeholders must be involved in the ship’s overall design and construction.

An iPLM system helps to enforce standardization, enabling lean manufacturing processes and enhancing producibility of various plant modules. For information technology architecture to continue to be relevant over the ship’s lifecycle, it is paramount that it be based on open standards and adhere to the most modern software and hardware architectural philosophies.

To provide substantive value, both for cost and schedule, tools such as real-time interference checking, advanced visualization, early validation and constructability analysis are key aspects of an iPLM solution in the ship’s early design cycle. For instance, early visualization allows feedback from construction, operations and maintenance back into the design process before it’s too late to inexpensively make changes.

There are also iPLM solution benefits for the development of future projects. Knowledge reuse is essential for decreasing costs and schedules for future units, and for continuous improvement of already built units. iPLM provides for more predictable design and construction schedules and costs, reducing risk for the development of new plants.

It is also necessary to consider cultural change within the ecosystem to reap the full iPLM solution benefits. iPLM represents a fundamentally different way of collaborating and closing the loop between the various parts of the ship development and operation lifecycle. As such, people and processes must change to take advantage of the tools and capabilities. Without these changes, many of the benefits of an iPLM solution could be lost.

Here are some sample cost and schedule benefits from Navy shipbuilding implementations of iPLM: reduction of documentation errors, 15 percent; performance to schedule increase, 25 percent; labor cost reduction for engineering analysis, 50 percent; change process cost and time reduction, 15 percent; and error correction cost reduction during production, 15 percent.

Conclusions

An iPLM approach to design, construction, operation and maintenance of a commercial nuclear power plant – while requiring reactor designers, engineering companies, owner/operators, and regulators to fundamentally change the way they approach these projects – has been shown in other industries to have substantial benefits related to cost, schedule and long-term operation and maintainability.

By developing and delivering two plants to the customer – the physical plant and the “digital plant” – substantial advantages will accrue during both plant construction and operation. Financial markets, shareholders, regulators and the general public will have more confidence in the development and operation of these plants through the predictability, performance to schedule and cost, and transparency that an iPLM solution can help provide.

Online Transient Stability Controls

For the last few decades, the growth of the world’s population and its corresponding increased demand for electrical energy have driven a huge increase in the supply of electrical power. However, for logistical, environmental, political and social reasons, this power generation is rarely near its consumers, necessitating the growth of very large and complex transmission networks. The addition of variable wind energy in remote locations is only exacerbating the situation. In addition, transmission grid capacity has not kept pace with either generation capacity or consumption, while at the same time remaining extremely vulnerable to potential large-scale outages due to outdated operational capabilities.

For example, today if a fault is detected in the transmission system, the only course is to shed both load and generation. This is often done without consideration of real-time consequences or alternative analysis. If not done rapidly, it can result in a widespread, cascading power system blackout. While it is necessary to remove factors that might lead to a large-scale blackout, restricting power flow or applying other countermeasures against such a failure may only achieve this by sacrificing economical operation. Thus, the flexible and economical operation of an electric power system may often be in conflict with the requirement for improved supply reliability and system stability.

Limits of Off-line Approaches

One approach to solving this problem involves stabilization systems deployed to prevent generator step-out by controlling generator acceleration through power shedding, in which some of the generators are shut off at the time of a power system fault.

In 1975, an off-line special protection system (SPS) for power flow monitoring was introduced to achieve the transient stability of the trunk power system and power source system after a network expansion in Japan. This system was initially of the type for which settings were determined in advance by manual calculations using transient stability simulation programs assuming many contingencies on typical power flow patterns.

This type of off-line solution has the following problems:

  • Planning, design, programming, implementation and operational tasks are laborious. A vast number of simulations are required to determine the setting tables and required countermeasures, such as generator shedding, whenever transmission lines are constructed;
  • It is not well suited to variable generation sources such as wind or photovoltaic farms;
  • It is not suitable for reuse and replication, incurring high maintenance costs; and
  • Excessive travel time and related labor expense is required for the engineer and field staff to maintain the units at numerous sites.

By contrast, an online TSC solution employs various sensors that are placed throughout the transmission network, substations and generation sources. These sensors are connected to regional computer systems via high speed communications to monitor, detect and execute contingencies on transients that may affect system stability. These systems in turn are connected to centralized computers which monitor the network of distributed computers, building and distributing contingencies based on historical and recent information. If a transient event occurs, the entire ecosystem responds within 150 ms to detect, analyze, determine the correct course of action, and execute the appropriate set of contingencies in order to preserve the stability of the power network.

In recent years, high-performance computational servers have been developed and their costs have been reduced enough to use many of them in parallel and/or in a distributed computing architecture. The result is a system that not only greatly increases the availability and reliability of the power system, but also optimizes the throughput of the grid: network efficiency itself has increased, delivering more throughput within the transmission grid without a significant investment in new transmission lines.

Solution and Elements

In 1995, for the first time ever, an online TSC system was developed and introduced in Japan. This solution provided a system stabilization procedure required by the construction of the new 500 kV trunk networks of Chubu Electric Power Co. (CEPCO) [1-4]. Figure 1 shows the configuration of the online TSC system. This system introduced a pre-processing online calculation in the TSC-P (parent) in addition to fast, post-event control executed by the combination of TSC-C (child) and TSC-T (terminal). This online TSC system can be considered an example of a self-healing solution of a smart grid. As a result of periodic simulations using the online data in the TSC-P, operators of energy management systems/supervisory control and data acquisition (EMS/SCADA) are constantly made aware of stability margins for current power system situations.

Using the same online data, periodic calculations performed in the TSC-P can reflect power network situations and determine the proper countermeasures to mitigate transient system events. The TSC-P simulates transient stability dynamics for about 100 contingencies of the power systems on the 500 kV, 275 kV and 154 kV transmission networks. The setting tables for required countermeasures, such as generator shedding, are periodically sent to the TSC-Cs located at main substations. The TSC-Ts, located at generation stations, shed the generators when an actual fault occurs. The actual generator shedding by the combination of TSC-Cs and TSC-Ts is completed within 150 ms after the fault to maintain the system’s stability.
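The division of labor described above – heavy, periodic stability analysis in the TSC-P and a fast, table-driven response in the TSC-Cs and TSC-Ts – can be pictured with the minimal Python sketch below. The contingency screen, the shedding rule and all names are illustrative assumptions rather than the CEPCO implementation; the point is simply that the expensive simulation runs ahead of time, so the post-fault action reduces to a table lookup that fits comfortably within the 150 ms budget.

```python
# Minimal sketch of the online TSC division of labor described above.
# The Contingency/setting-table structures and the crude stability rule
# are illustrative assumptions, not the CEPCO implementation.
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass(frozen=True)
class Contingency:
    faulted_line: str          # e.g. a 500 kV circuit identifier
    pre_fault_flow_mw: float   # power flow on that line before the fault

def screen_contingencies(contingencies: List[Contingency],
                         stability_limit_mw: float) -> Dict[str, Tuple[str, ...]]:
    """TSC-P role: periodically simulate ~100 contingencies and build a
    setting table mapping each fault to the generators to shed."""
    setting_table: Dict[str, Tuple[str, ...]] = {}
    for c in contingencies:
        overload = c.pre_fault_flow_mw - stability_limit_mw
        if overload <= 0:
            setting_table[c.faulted_line] = ()   # no countermeasure needed
        else:
            # Placeholder rule: shed one 300 MW unit per 300 MW of overload.
            units = int(overload // 300) + 1
            setting_table[c.faulted_line] = tuple(f"GEN-{i + 1}" for i in range(units))
    return setting_table

def on_fault(setting_table: Dict[str, Tuple[str, ...]], faulted_line: str) -> Tuple[str, ...]:
    """TSC-C/TSC-T role: when a fault actually occurs, apply the
    pre-computed countermeasure immediately (well inside 150 ms)."""
    return setting_table.get(faulted_line, ())

if __name__ == "__main__":
    table = screen_contingencies(
        [Contingency("500kV-A", 1200.0), Contingency("275kV-B", 400.0)],
        stability_limit_mw=800.0)
    print(on_fault(table, "500kV-A"))   # ('GEN-1', 'GEN-2')
```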

Customer Experiences and Benefits

Figure 2 shows the locations of online TSC systems and their coverage areas in CEPCO’s power network. There are two online TSC systems currently operating; namely, the trunk power TSC system, to protect the 500 kV trunk power system introduced in 1995, and the power source TSC system to protect the 154 kV to 275 kV power source systems around the generation stations.

Actual performance data have shown some significant benefits:

  • Total transfer capability (TTC) is improved through elimination of transient stability limitations. TTC is decided by the minimum of the limits imposed not only by the thermal limit of transmission lines but also by transient stability, frequency stability and voltage stability. Transient stability limits often determine the TTC in the case of long transmission lines from generation plants. CEPCO was able to introduce high-efficiency, combined-cycle power plants without constructing new transmission lines: TTC was increased from 1,500 MW to 3,500 MW by introducing the online TSC solution.
  • Power shedding is optimized. Not only is the power flow of the transmission line on which a fault occurs assessed, but the effects of other power flows surrounding the fault point are included in the analysis to decide the precise stability limit. The online TSC system can also reflect the constraints and priorities of each generator to be shed. To ensure a smooth restoration after the fault, the restart time of shut-off generators, for instance, can also be included.
  • When constructing new transmission lines, numerous off-line studies assuming various power flow patterns are required to support an off-line SPS. After introduction of the online TSC system, constructing new transmission lines became more efficient: only the equipment database used for simulation in the TSC-P needs to be changed.

In 2003, this CEPCO system received the 44th Annual Edison Award from the Edison Electric Institute (EEI), recognizing CEPCO’s achievement with the world’s first application of this type of system, and the contribution of the system to efficient power management.

Today, benefits continue to accrue. A new TSC-P, which adopts the latest high-performance computation servers, is now under construction for operation in 2009 [3]. The new system will shorten the calculation interval from every five minutes to every 30 seconds in order to reflect power system situations as precisely as possible. This interval was determined by the analysis of various stability situations recorded by the current TSC-P over more than 10 years of operation.

Additionally, although the current TSC-P uses the same online data as used by EMS/SCADA, it can control emergency actions against small-signal instability by receiving phasor measurement unit (PMU) data to detect divergences of phasor angles and voltages among the main substations.

Summary

The online TSC system is expected to realize optimum stabilization control of recent complicated power system conditions by obtaining power system information online and carrying out stability calculations at specific intervals. The online TSC will thus help utilities achieve better returns on investment in new or renovated transmission lines, reducing outage time and enabling a more efficient smart grid.

References

  1. Ota, Kitayama, Ito, Fukushima, Omata, Morita and Y. Kokai, “Development of Transient Stability Control System (TSC System) Based on Online Stability Calculation,” IEEE Transactions on Power Systems, Vol. 11, No. 3, pp. 1463-1472, August 1996.
  2. Koaizawa, Nakane, Omata and Y. Kokai, “Actual Operating Experience of Online Transient Stability Control System (TSC System),” IEEE PES Winter Meeting, 2000, Vol. 1, pp. 84-89.
  3. Takeuchi, Niwa, Nakane and T. Miura, “Performance Evaluation of the Online Transient Stability Control System (Online TSC System),” IEEE PES General Meeting, June 2006.
  4. Takeuchi, Sato, Nishiiri, Kajihara, Kokai and M. Yatsu, “Development of New Technologies and Functions for the Online TSC System,” IEEE PES General Meeting, June 2006.

Is Your Mobile Workforce Truly Optimized?

ClickSoftware is the leading provider of mobile workforce management and service optimization solutions that create business value for service operations through higher levels of productivity, customer satisfaction and cost effectiveness. Combining educational, implementation and support services with best practices and its industry leading solutions, ClickSoftware drives service decision making across all levels of the organization.

Our mobile workforce management solution helps utilities empower mobile workers with accurate, real-time information for optimum service and quick on-site decision making. From proactive customer demand forecasting and capacity planning to real-time decision making, incorporating scheduling, mobility and location-based services, ClickSoftware helps service organizations get the most out of their resources.

The IBM/ClickSoftware alliance provides the most comprehensive offering for Mobile Workforce and Asset Management powering the real-time service enterprise. Customers can benefit from maximized workforce productivity and customer satisfaction while controlling, and then minimizing, operational costs.

ClickSoftware provides a flexible, scalable and proven solution that has been deployed at many utility companies around the world. Highlights include the ability to:

  • Automatically update the schedule based on real-time information from the field;
  • Manage crews (parts and people);
  • Cover a wide variety of job types within one product: from short jobs requiring one person to multi-stage jobs requiring a multi-person team over several days or weeks;
  • Balance regulatory, environmental and union compliance;
  • Continuously strive to raise the bar in operational excellence;
  • Incorporate street-level routing into the decision making process; and
  • Plan for catastrophic events and seasonal variability in field service operations.

The resulting value proposition to the customer is extremely compelling:

  • Typically, optimized scheduling and routing of the mobile workforce generates a 31 percent increase in jobs per day vs. the industry average. (Source: AFSMI survey, 2003)
  • A variety of solutions, ranging from entry level to advanced, that directly address the broad spectrum of pains experienced by service organizations around the world, including optimized scheduling, routing, mobile communications and integration of solutions components – within the service optimization solution itself and also into the CRM/ERP/EAM back end.
  • An entry level offering with a staged upgrade path toward a fully automated service optimization solution ensures that risk is managed and the most challenging of customer requirements may be met. This “least risk” approach for the customer is delivered by a comprehensive set of IBM business consulting, installation and support services.
  • The industry-proven credibility of ClickSoftware’s ServiceOptimization Suite, combined with IBM’s wireless middleware, software, hardware and business consulting services provides the customer with the most effective platform for managing its field service operations.

ClickSoftware’s customers represent a cross-section of leaders in the utilities, telecommunications, computer and office equipment, home services and capital equipment industries. Over 100 customers around the world have deployed ClickSoftware workforce and service optimization solutions and services to achieve optimal levels of field service.

At Your Service

Today’s utility companies are being driven to upgrade their aging transmission and distribution networks in the face of escalating energy generation costs, serious environmental challenges and rising demand for cleaner, distributed generation from both developing and digital economies worldwide.

The current utilities environment requires companies to drive down costs while increasing their ability to monitor and control utility assets. Yet, due to aging infrastructure, many utilities operate without the benefit of real-time data on usage and distribution loads – while also contending with limited resources for repair and improvement. Even consumers, with climate change on their minds, are demanding that utilities find more innovative ways to help them reduce energy consumption and costs.

One of the key challenges facing the industry is how to take advantage of new technologies to better manage customer service delivery today and into the future. While introducing this new technology, utilities must keep data and networks secure to be in compliance with critical infrastructure protection regulations. The concept of “service management” for the smart grid provides an approach for getting started.

A Smart Grid

A smart grid is created with new solutions that enable new business models. It brings together processes, technology and business partners, empowering utilities with an IP-enabled, continuous sensing network that overlays and connects a utility’s equipment, devices, systems, customers, partners and employees. A smart grid also enables on-demand access to data and information, which is used to better manage, automate and optimize operations and processes throughout the utility.

A utility relies on numerous systems, which reside both within and outside its physical boundaries. Common internal systems include energy trading systems (ETS), customer information systems (CIS), supervisory control and data acquisition systems (SCADA), outage management systems (OMS), enterprise asset management (EAM), mobile workforce management systems (MWFM), geospatial information systems (GIS) and enterprise resource planning systems (ERP).

These systems are purchased from multiple vendors and often use a variety of protocols to communicate. In addition, utilities must interface with external systems – and often integrate all of them using a point-to-point model and establish connectivity on an as-needed basis. The point-to-point approach can result in numerous complex connections that need to be maintained.
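For example, fully integrating ten such systems point-to-point can require up to 10 × 9 / 2 = 45 distinct interfaces, each of which must be built, secured and maintained separately.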

Service Management

The key concept behind service management is the idea of managing assets, networks and systems to provide a “service,” as opposed to simply operating the assets. For example, Rolls Royce Civil Aerospace division uses this concept to sell “pounds of thrust” as a service. Critical to a utility’s operation is the ability to manage all facets of the services being delivered. Also critical to the operation of the smart grid are new solutions in advanced meter management (AMM), network automation and analytics, and EAM, including meter asset management.

A service management platform provides a way for utility companies to manage the services they deliver with their enterprise and information technology assets. It provides a foundation for managing the assets, their configuration, and the interrelationships key to delivering services. It also provides a means of defining workflow for the instantiation and management of the services being delivered. Underlying this platform is a range of tools that can assist in management of the services.

Gathering and analyzing data from advanced meters, network components, distribution devices, and legacy SCADA systems provides a solid foundation for automating service management. When combined with the information available in their asset management systems, utility companies can streamline operations and make more efficient use of valuable resources.

Advanced Reading

AMM centers on a more global view of the informational infrastructure, examining how automatic meter reading (AMR) and advanced metering infrastructure (AMI) integrate with other information systems to provide value-added benefits. It is important to note that for many utilities, AMM is considered to be a “green” initiative since it has the ability to influence customer usage patterns and, therefore, lower peak demand.

The potential for true business transformation exists through AMM, and adopting this solution is the first stage in a utility’s transformation to a more information-powered business model. New smart meters are network addressable, and along with AMM, are core components of the grid. Smart meters and AMM provide the capability to automatically collect usage data in near real time and to transport meter reads at regular intervals or on demand.

AMR/AMIs that aggregate their data in collection servers or concentrators, and expose it through an interface, can be augmented with event management products to monitor the meter’s health and operational status. Many organizations already deploy these solutions for event management within a network’s operations center environments, and for consolidated operations management as a top-level “manager of managers.”

A smart grid includes many devices other than meters, so event management can also be used to monitor the health of the rest of the network and IT equipment in the utility infrastructure. Integrating meter data with operations events gives network operations center operators a much broader view of a utility’s distribution system.

These solutions enable end-to-end data integration, from the meter collection server in a substation to the back-end helpdesk and billing applications. This approach can lead to improved speed and accuracy of data, while leveraging existing equipment and applications.
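As a simple illustration of how meter events can flow into the same consolidated operations view as network and IT events, the sketch below tags incoming events by source and severity and surfaces the most critical ones first. The field names, severities and thresholds are hypothetical; real AMR/AMI collection servers and event-management products expose richer, vendor-specific interfaces.

```python
# Hypothetical sketch: one consolidated event view spanning meters,
# network devices and IT equipment, as described above.
from dataclasses import dataclass
from typing import List

@dataclass
class Event:
    source: str      # "meter", "network" or "it"
    asset_id: str
    kind: str        # e.g. "last_gasp", "link_down", "disk_full"
    severity: int    # 1 (informational) .. 5 (critical)

def operations_view(events: List[Event], min_severity: int = 3) -> List[str]:
    """'Manager of managers' view: one ordered alert list across sources."""
    alerts = [e for e in events if e.severity >= min_severity]
    alerts.sort(key=lambda e: e.severity, reverse=True)
    return [f"{e.severity}: [{e.source}] {e.asset_id} {e.kind}" for e in alerts]

print("\n".join(operations_view([
    Event("meter", "MTR-1042", "last_gasp", 5),
    Event("network", "SUB-7/router-2", "link_down", 4),
    Event("it", "billing-db-01", "disk_full", 3),
    Event("meter", "MTR-2210", "low_battery", 2),
])))
```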

Network Automation and Analytics

Most utility companies use SCADA systems to collect data from sensors on the energy grid and send events to applications with SCADA interfaces. These systems collect data from substations, power plants and other control centers. They then process the data and allow for control actions to be sent back out. Energy management and distribution management systems typically provide additional features on top of SCADA, targeting either the transmission or distribution grids.

SCADA systems are often distributed on several servers (anywhere from two to 100) connected via a redundant local area network. The SCADA system, in turn, communicates with remote terminal units (RTUs), other devices, and other computer networks. RTUs reside in a substation or power plant, and are hardwired to other devices to bring back meaningful information such as current megawatts, amps, volts, pressure, open/closed or tripped. Distribution business units within a utility company also utilize SCADA systems to track low voltage applications, such as meters and pole drops, compared to the transmission business units’ larger assets, including towers, circuits and switchgear.

To facilitate network automation, IT solutions can help utilities to monitor and analyze data from SCADA systems in real time, monitor the computer network systems used to deploy SCADA systems, and better secure the SCADA network and applications using authentication software. An important element of service management is the use of automation to perform a wide range of actions to improve workflow efficiency. Another key ingredient is the use of service level agreements (SLAs) to give a business context for IT, enabling greater accountability to business user needs, and improving a utility’s ability to prioritize and optimize.

A smart grid includes a large number of devices and meters – millions in a large utility – and these are critical to a utility’s operations. A combination of IT solutions can be deployed to manage events from SCADA devices, as well as the IT equipment they rely on.

EAM For Utilities

Historically, many utility companies have managed their assets in silos. However, the emergence of the smart grid and smart meters, challenges of an aging workforce, an ever-demanding regulatory environment, and the availability of common IT architecture standards, are making it critical to standardize on one asset management platform as new requirements to integrate physical assets and IT assets arise (see Figure 1).

Today, utility companies are using EAM to manage work in gas and electric distribution operations, including construction, inspections, leak management, vehicles and facilities. In transmission and substation operations, EAM software is used for preventive and corrective maintenance and inspections.

EAM also helps track the financial aspects of assets, such as purchasing, depreciation, asset valuation and replacement costs. This solution helps integrate this data with ERP systems, and stores the history of asset testing and maintenance management. It integrates with GIS or other mapping tools to create geographic and spatial views of all distribution and smart grid assets.

Meter asset management is another area of increasing interest, as meters have an asset lifecycle similar to most other assets in a utility. Meter asset management involves tracking the meter from receipt to storeroom, to truck, to final location – as compared to managing the data the meter produces.

Now there is an IT asset management solution with the ability to manage meters as part of the IT network. This solution can be used to provision the meter, track configurations and provide service desk functionality. IT asset management solutions also have the ability to update meter firmware, and easily move and track the location and status of the assets over time in conjunction with a configuration database.

Reducing the number of truck rolls is another key focus area for utility companies. Using a combination of solutions, companies can:

  • Better manage the lifecycles of physical assets such as meters, meter cell relays, and broadband over powerline (BPL) devices to improve preventive maintenance;
  • Reconcile deployed asset information with information collected by meter data management systems;
  • Correlate the knowledge of physical assets with problems experienced with the IT infrastructure to better analyze a problem for root cause; and
  • Establish more efficient business process workflows and strengthen governance across a company.

Utilities are facing many challenges today and taking advantage of new technologies that will help better manage the delivery of service to customers tomorrow. The deployment of the smart grid and related solutions is a significant initiative that will be driving utilities for the next 10 years or more.

The concept of “service management” for the smart grid provides an approach for getting started. But these do not need to be tackled all at once. Utilities should develop a roadmap for the smart grid; each one will depend on specific priorities. But utilities don’t have to go it alone. The smart grid maturity model (SGMM) can enable a utility to develop a roadmap of activities, investments and best practices to ensure success and progress with available resources.

The GridWise Olympic Peninsula Project

The Olympic Peninsula Project consisted of a field demonstration and test of advanced price signal-based control of distributed energy resources (DERs). Sponsored by the U.S. Department of Energy (DOE) and led by the Pacific Northwest National Laboratory, the project was part of the Pacific Northwest GridWise Testbed Demonstration.

Other participating organizations included the Bonneville Power Administration, Public Utility District (PUD) #1 of Clallam County, the City of Port Angeles, Portland General Electric, IBM’s T.J. Watson Research Center, Whirlpool and Invensys Controls. The main objective of the project was to convert normally passive loads and idle distributed generation into actively participating resources optimally coordinated in near real time to reduce stress on the local distribution system.

Planning began in late 2004, and the bulk of the development work took place in 2005. By late 2005, equipment installations had begun, and by spring 2006, the experiment was fully operational, remaining so for one full year.

The motivating theme of the project was based on the GridWise concept that inserting intelligence into electric grid components at every point in the supply chain – from generation through end-use – will significantly improve both the electrical and economic efficiency of the power system. In this case, information technology and communications were used to create a real-time energy market system that could control demand response automation and distributed generation dispatch. Optimal use of the DER assets was achieved through the market, which was designed to manage the flow of power through a constrained distribution feeder circuit.

The project also illustrated the value of interoperability in several ways, as defined by the DOE’s GridWise Architecture Council (GWAC). First, a highly heterogeneous set of energy assets, associated automation controls and business processes was composed into a single solution integrating a purely economic or business function (the market-clearing system) with purely physical or operational functions (thermostatic control of space heating and water heating). This demonstrated interoperability at the technical and informational levels of the GWAC Interoperability Framework (www.gridwiseac.org/about/publications.aspx), providing an ideal example of a cyber-physical-business system. In addition, it represents an important class of solutions that will emerge as part of the transition to smart grids.

Second, the objectives of the various asset owners participating in the market were continuously balanced to maintain the optimal solution at any point in time. This included the residential demand response customers; the commercial and municipal entities with both demand response and distributed generation; and the utilities, which demonstrated interoperability at the organizational level of the framework.

PROJECT RESOURCES

The following energy assets were configured to respond to market price signals:

  • Residential demand response for electric space and water heating in 112 single-family homes using gateways connected by DSL or cable modem to provide two-way communication. The residential demand response system allowed the current market price of electricity to be presented to customers. Consumers could also configure their demand response automation preferences. The residential consumers were evenly divided among three contract types (fixed, time of use and real time) and a fourth control group. All electricity consumption was metered, but only the loads in price-responsive homes were controlled by the project (approximately 75 KW).
  • Two distributed generation units (175 KW and 600 KW) at a commercial site served the facility’s load when the feeder supply was not sufficient. These units were not connected in parallel to the grid, so they were bid into the market as a demand response asset equal to the total load of the facility (approximately 170 KW). When the bid was satisfied, the facility disconnected from the grid and shifted its load to the distributed generation units.
  • One distributed microturbine (30 KW) that was connected in parallel to the grid. This unit was bid into the market as a generation asset based on the actual fixed and variable expenses of running the unit.
  • Five 40-horsepower (HP) water pumps distributed between two municipal water-pumping stations (approximately 150 KW of total nameplate load). The demand response load from these pumps was incrementally bid into the market based on the water level in the pumped storage reservoir, effectively converting the top few feet of the reservoir capacity into a demand response asset on the electrical grid.

Monitoring was performed for all of these resources, and in cases of price-responsive contracts, automated control of demand response was also provided. All consumers who employed automated control were able to temporarily disable or override project control of their loads or generation units. In the residential real-time price demand response homes, consumers were given a simple configuration choice for their space heating and water heating that involved selecting an ideal set point and a degree of trade-off between comfort and price responsiveness.

For real-time price contracts, the space heater demand response involved automated bidding into the market by the space heating system. Since the programmable thermostats deployed in the project didn’t support real-time market bidding, IBM Research implemented virtual thermostats in software using an event-based distributed programming prototype called Internet-Scale Control Systems (iCS). The iCS prototype is designed to support distributed control applications that span virtually any underlying device or business process through the definition of software sensor, actuator and control objects connected by an asynchronous event programming model that can be deployed on a wide range of underlying communication and runtime environments. For this project, virtual thermostats were defined that conceptually wrapped the real thermostats and incorporated all of their functionality while at the same time providing the additional functionality needed to implement the real-time bidding. These virtual thermostats received the actual temperature of the house as well as information about the real-time market average price and price distribution and the consumer’s preferences for set point and comfort/economy trade-off setting. This allowed the virtual thermostats to calculate the appropriate bid every five minutes based on the changing temperature and market price of energy.
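A minimal sketch of such a bid calculation is shown below. The linear comfort/price trade-off, the function name and the parameter values are assumptions made for illustration; the project’s actual bidding formula is documented in its final report rather than reproduced here.

```python
# Hypothetical virtual-thermostat bid, recomputed every five minutes.
# The linear trade-off below is an illustrative assumption.
def thermostat_bid(current_temp_f: float,
                   set_point_f: float,
                   comfort_economy: float,   # 0.0 = maximum comfort, 1.0 = maximum economy
                   avg_price: float,         # recent market average price
                   price_std: float) -> float:
    """Price this home is willing to pay for space heating this cycle.

    The further the house drifts below its set point, the higher the bid;
    a more economy-minded setting scales the bid down relative to the
    recent price distribution.
    """
    temp_deficit = max(0.0, set_point_f - current_temp_f)
    willingness = (1.0 - comfort_economy) * temp_deficit
    return avg_price + willingness * price_std

# A home 4 degrees below its set point with a moderately comfort-oriented setting.
print(thermostat_bid(66.0, 70.0, comfort_economy=0.3, avg_price=45.0, price_std=8.0))
```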

The real-time market in the project was implemented as a shadow market – that is, rather than change the actual utility billing structure, the project implemented a parallel billing system and a real-time market. Consumers still received their normal utility bill each month, but in addition they received an online bill from the shadow market. This additional bill was paid from a debit account that used funds seeded by the project based on historical energy consumption information for the consumer.

The objective was to provide an economic incentive to consumers to be more price responsive. This was accomplished by allowing the consumers to keep the remaining balance in the debit account at the end of each quarter. Those consumers who were most responsive were estimated to receive about $150 at the end of the quarter.

The market in the project cleared every five minutes, having received demand response bids, distributed generation bids and a base supply bid based on the supply capacity and wholesale price of energy in the Mid-Columbia system operated by Bonneville Power Administration. (This was accomplished through a Dow Jones feed of the Mid-Columbia price and other information sources for capacity.) The market operation required project assets to submit bids every five minutes into the market, and then respond to the cleared price at the end of the five-minute market cycle. In the case of residential space heating in real-time price contract homes, the virtual thermostats adjusted the temperature set point every five minutes; however, in most cases the adjustment was negligible (for example, one-tenth of a degree) if the price was stable.
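The five-minute cycle can be illustrated with the simplified clearing sketch below. It assumes a single base supply bid at the wholesale-based price, price/quantity demand bids and a fixed feeder import limit; the pricing convention when the feeder is constrained is an illustrative assumption, not the project’s actual clearing engine.

```python
# Simplified five-minute market clearing: one base supply bid plus
# demand bids, subject to a feeder import limit.  Conventions here are
# illustrative assumptions.
from typing import List, Tuple

def clear_market(demand_bids: List[Tuple[float, float]],   # (price, quantity in kW)
                 supply_price: float,                       # wholesale-based base supply bid
                 feeder_capacity_kw: float) -> Tuple[float, List[Tuple[float, float]]]:
    """Return (clearing price, accepted demand bids)."""
    accepted: List[Tuple[float, float]] = []
    remaining = feeder_capacity_kw
    clearing_price = supply_price
    # Serve the highest-value demand first.
    for price, qty in sorted(demand_bids, reverse=True):
        if price < supply_price or remaining <= 0:
            break
        take = min(qty, remaining)
        accepted.append((price, take))
        remaining -= take
        if remaining <= 0:
            # Feeder is constrained: the marginal accepted bid sets the price.
            clearing_price = price
    return clearing_price, accepted

price, served = clear_market(
    demand_bids=[(120.0, 400.0), (60.0, 500.0), (40.0, 200.0)],
    supply_price=50.0,
    feeder_capacity_kw=750.0)
print(price, served)   # constrained cycle: price rises to 60.0
```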

KEY FINDINGS

Distribution constraint management. As one of the primary objectives of the experiment, distribution constraint management was successfully accomplished. The distribution feeder-imported capacity was managed through demand response automation to a cap of 750 KW for all but one five-minute market cycle during the project year. In addition, distributed generation was dispatched as needed during the project, up to a peak of about 350 KW.

During one period of about 40 hours that took place from Oct. 30, 2006, to Nov. 1, 2006, the system successfully constrained the feeder import capacity at its limit and dispatched distributed generation several times, as shown in Figure 1. In this figure, actual demand under real-time price control is shown in red, while the blue line depicts what demand would have been without real-time price control. It should be noted that the red demand line steps up and down above the feeder capacity line several times during the event – this is the result of distributed generation units being dispatched and removed as their bid prices are met or not.

Market-based control demonstrated. The project controlled both heating and cooling loads, which showed a surprisingly significant shift in energy consumption. Space conditioning loads in real-time price contract homes demonstrated a significant shift to early morning hours – a shift that occurred during both constrained and unconstrained feeder conditions but was more pronounced during constrained periods. This is similar to what one would expect in preheating or precooling systems, but neither the real nor the virtual thermostats in the project had any explicit prediction capability. The analysis showed that the diurnal shape of the price curve itself caused the effect.

Peak load reduced. The project’s real-time price control system both deferred and shifted peak load very effectively. Unlike the time-of-use system, the real-time price control system operated at a fine level of precision, responding only when constraints were present and resulting in a precise and proportionally appropriate level of response. The time-of-use system, on the other hand, was much coarser in its response and responded regardless of conditions on the grid, since it was only responding to preconfigured time schedules or manually initiated critical peak price signals.

Internet-based control demonstrated. Bids and control of the distributed energy resources in the project were implemented over Internet connections. As an example, the residential thermostats modified their operation through a combination of local and central control communicated as asynchronous events over the Internet. Even in situations of intermittent communication failure, resources typically performed well in default mode until communications could be re-established. This example of the resilience of a well-designed, loosely coupled distributed control application schema is an important aspect of what the project demonstrated.

Distributed generation served as a valuable resource. The project was highly effective in using the distributed generation units, dispatching them many times over the duration of the experiment. Since the diesel generators were restricted by environmental licensing regulations to operate no more than 100 hours per year, the bid calculation factored in a sliding scale price premium such that bids would become higher as the cumulative runtime for the generators increased toward 100 hours.
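One way to picture that sliding scale is the hypothetical sketch below, in which a generator’s bid rises steeply as its cumulative runtime approaches the 100-hour limit. Only the 100-hour environmental restriction and the idea of an escalating premium come from the project description; the premium curve and cost figures are illustrative assumptions.

```python
# Hypothetical sliding-scale premium: bids climb toward infinity as the
# generator approaches its 100-hour annual runtime limit.
def generator_bid(base_cost: float,           # fixed plus variable cost per MWh
                  hours_run_this_year: float,
                  annual_limit_hours: float = 100.0) -> float:
    used = min(hours_run_this_year, annual_limit_hours) / annual_limit_hours
    premium = base_cost * (used / (1.0 - used)) if used < 1.0 else float("inf")
    return base_cost + premium

for hours in (0, 50, 90, 99):
    print(hours, round(generator_bid(200.0, hours), 1))   # 200.0, 400.0, 2000.0, 20000.0
```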

CONCLUSION

The Olympic Peninsula Project was unique in many ways. It clearly demonstrated the value of the GridWise concepts of leveraging information technology and incorporating market constructs to manage distributed energy resources. Local marginal price signals, as implemented through the market-clearing process, together with the overall event-based software integration framework, successfully managed the bidding and dispatch of loads and balanced wholesale costs, distribution congestion and customer needs in a very natural fashion.

The final report (as well as background material) on the project is available at www.gridwise.pnl.gov. The report expands on the remarks in this article and provides detailed coverage of a number of important assertions supported by the project, including:

  • Market-based control was shown to be a viable and effective tool for managing price-based responses from single-family premises.
  • Peak load reduction was successfully accomplished.
  • Automation was extremely important in obtaining consistent responses from both supply and demand resources.
  • The project demonstrated that demand response programs could be designed by establishing debit account incentives without changing the actual energy prices offered by energy providers.

Although technological challenges were identified and noted, the project found no fundamental obstacles to implementing similar systems at a much larger scale. Thus, it’s hoped that an opportunity to do so will present itself at some point in the near future.

Weathering the Perfect Storm

A “perfect storm” of daunting proportions is bearing down on utility companies: assets are aging; the workforce is aging; and legacy information technology (IT) systems are becoming an impediment to efficiency improvements. This article suggests a three-pronged strategy to meet the challenges posed by this triple threat. By implementing best practices in the areas of business process management (BPM), system consolidation and IT service management (ITSM), utilities can operate more efficiently and profitably while addressing their aging infrastructure and staff.

BUSINESS PROCESS MANAGEMENT

In a recent speech before the Utilities Technology Conference, the CIO of one of North America’s largest integrated gas and electric utilities commented that “information technology is a key to future growth and will provide us with a sustainable competitive advantage.” The quest by utilities to improve shareholder and customer satisfaction has led many CIOs to reach this same conclusion: nearly all of their efforts to reduce the costs of managing assets depend on information management.

Echoing this observation, a survey of utility CIOs showed that the top business issue in the industry was the need to improve business process management (BPM).[1] It’s easy to see why.

BPM enables utilities to capture, propagate and evolve asset management best practices while maintaining alignment between work processes and business goals. For most companies, the standardized business processes associated with BPM drive work and asset management activities and bring a host of competitive advantages, including improvements in risk management, revenue generation and customer satisfaction. Standardized business processes also allow management to more successfully implement business transformation in an environment that may include workers acquired in a merger, workers nearing retirement and new workers of any age.

BPM also helps enforce a desirable culture change by creating an adaptive enterprise where agility, flexibility and top-to-bottom alignment of work processes with business goals drive the utility’s operations. These work processes need to be flexible so management can quickly respond to the next bump in the competitive landscape. Using standard work processes drives desired behavior across the organization while promoting the capture of asset-related knowledge held by many long-term employees.

Utility executives also depend on technology-based BPM to improve processes for managing assets. This allows them to reduce staffing levels without affecting worker safety, system reliability or customer satisfaction. These processes, when standardized and enforced, result in common work practices throughout the organization, regardless of region or business unit. BPM can thus yield an integrated set of applications that can be deployed in a pragmatic manner to improve work processes, meet regulatory requirements and reduce total cost of ownership (TCO) of assets.

BPM Capabilities

Although the terms business process management and work flow are often used synonymously – and are indeed related – they refer to distinctly different things. BPM is a strategic activity undertaken by an organization looking to standardize and optimize business processes, whereas work flow refers to IT solutions that automate processes – for example, solutions that support the execution phase of BPM.

There are a number of core BPM capabilities that, although individually important, are even more powerful than the sum of their parts when leveraged together. Combined, they provide a powerful solution to standardize, execute, enforce, test and continuously improve asset management business processes. These capabilities include:

  • Support for local process variations within a common process model;
  • Visual design tools;
  • Revision management of process definitions;
  • Web services interaction with other solutions;
  • XML-based process and escalation definitions;
  • Event-driven user interface interactions;
  • Component-based definition of processes and subprocesses; and
  • Single engine supporting push-based (work flow) and polling-based (escalation) processes.
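As a rough illustration of two of these capabilities – XML-based process definitions and a single engine stepping through a push-based work flow – the following sketch loads an invented process definition and walks its steps. The schema and step names are purely illustrative; commercial BPM suites define far richer formats.

```python
# Hypothetical sketch: parse a tiny XML process definition and walk it with a
# toy engine, noting any polling-based escalation rules along the way.

import xml.etree.ElementTree as ET

PROCESS_XML = """
<process name="work-order-approval" revision="3">
  <step id="create"  next="review"/>
  <step id="review"  next="approve" escalateAfterHours="24"/>
  <step id="approve" next="close"/>
  <step id="close"/>
</process>
"""

def run_process(xml_text: str) -> None:
    root = ET.fromstring(xml_text)
    steps = {s.get("id"): s for s in root.findall("step")}
    current = "create"
    while current:
        step = steps[current]
        print(f"executing step '{current}' of process '{root.get('name')}'")
        if step.get("escalateAfterHours"):
            print(f"  escalation rule: poll and escalate after "
                  f"{step.get('escalateAfterHours')} hours")
        current = step.get("next")  # None on the final step ends the walk

run_process(PROCESS_XML)
```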

Since BPM supports knowledge capture from experienced employees, what is the relationship between BPM and knowledge management? Research has shown that the best way to capture knowledge that resides in workers’ heads into some type of system is to transfer the knowledge to systems they already use. Work and asset management systems hold job plans, operational steps, procedures, images, drawings and other documents. These systems are also the best place to put information required to perform a task that an experienced worker “just knows” how to do.

By creating appropriate work flows in support of BPM, workers can be guided through a “debriefing” stage, where they can review existing job plans and procedures, and look for tasks not sufficiently defined to be performed without the tacit knowledge learned through experience. Then, the procedure can be flagged for additional input by a knowledgeable craftsperson. This same approach can even help ensure the success of the “debriefing” application itself, since BPM tools by definition allow guidance to be built in by creating online help or by enhancing screen text to explain the next step.

SYSTEM CONSOLIDATION

System consolidation needs to involve more than simply combining applications. For utilities, system consolidation efforts ought to focus on making systems agile enough to support near real-time visibility into critical asset data. This agility yields transparency across lines of business on the one hand and satisfies regulators and customers on the other. To achieve this level of transparency, utilities must adopt a modern enterprise architecture that supports both service-oriented architectures (SOAs) and BPM.

Done right, system consolidation allows utilities to create a framework supporting three key business areas:

  • Optimization of both human and physical assets;
  • Standardization of processes, data and accountability; and
  • Flexibility to change and adapt to what’s next.

The Need for Consolidation

Many utility transmission and distribution (T&D) divisions exhibit this need for consolidation. Over time, the business operations of many of these divisions have introduced different systems to support a perceived immediate need – without considering similar systems that may already be implemented within the utility. Eventually, the business finds it owns three different “stacks” of systems managing assets, work assignments and mobile workers – one for short-cycle service work, one for construction and still another for maintenance and inspection work.

With these systems in place, it’s nearly impossible to implement productivity programs – such as cross-training field crews in both construction and service work – or to take advantage of a “common work queue” that would allow workers to fill open time slots without returning to their regional service center. In addition, owning and operating these “siloed” systems adds significant IT costs, as each one has annual maintenance fees, integration costs, yearly application upgrades and retraining requirements.

In such cases, using one system for all work and asset management would eliminate multiple applications and deliver bottom-line operational benefits: more productive workers, more reliable assets and technology cost savings. One large Midwestern utility adopting the system consolidation approach was able to standardize on six core applications: work and asset management, financials, document management, geographic information systems (GIS), scheduling and mobile workforce management. The asset management system alone was able to consolidate more than 60 legacy applications. In addition to the obvious cost savings, these consolidated asset management systems are better able to address operational risk, worker health and safety and regulatory compliance – both operational and financial – making utilities more competitive.

A related benefit of system consolidation concerns the elimination of rogue “pop-up” applications. These are niche applications, often spreadsheets or standalone databases, which “pop up” throughout an organization on engineers’ desktops. Many of these applications perform critical roles in regulatory compliance yet are unlikely to pass muster at any Sarbanes-Oxley review. Typically, these pop-up applications are built to fill a “functionality gap” in existing legacy systems. Using an asset management system with a standards-based platform allows utilities to roll these pop-up applications directly into their standard, supported work and asset management system.

Employees must interact with many systems in a typical day. How productive is the maintenance electrician who uses one system for work management, one for ordering parts and yet another for reporting his or her time at the end of a shift? Think of the time wasted navigating three distinct systems with different user interfaces, and the duplication of data that unavoidably occurs. How much more efficient would it be if the electrician were able to use one system that supported all of his or her work requirements? A logical grouping of systems clearly enables all workers to leverage information technology to be more efficient and effective.

Today, using modern, standards-based technologies like SOAs, utilities can eliminate the counterproductive mix of disparate commercial and “home-grown” systems. Automated processes can be delivered as Web services, allowing asset and service management to be included in the enterprise application portfolio, joining the ranks of human resource (HR), finance and other business-critical applications.

But although system consolidation in general is a good thing, there is a “tipping point” where consolidating simply for the sake of consolidation no longer provides a meaningful return and can actually erode savings and productivity gains. A system consolidation strategy should center on core competencies. For example, accountants and doctors are both skilled service professionals, but that high-level similarity doesn’t mean you would trade one for the other just to “consolidate” the bills you receive and the checks you have to write. You don’t want accountants reading your X-rays. The same is true for your systems’ needs. Your organization’s accounting or human resource software does not possess the unique capabilities to help you manage your mission-critical transmission and distribution, facilities, vehicle fleet or IT assets. Hence it is unwise to consolidate these mission-critical systems.

System consolidation strategically aligned with business requirements offers huge opportunities for improving productivity and eliminating IT costs. It also improves an organization’s agility and reverses the historical drift toward stovepipe or niche systems by providing appropriate systems for critical roles and stakeholders within the organization.

IT SERVICE MANAGEMENT

IT Service Management (ITSM) is critical to helping utilities deal with aging assets, infrastructure and employees primarily because ITSM enables companies to surf the accelerating trend of asset management convergence instead of falling behind more nimble competitors. Used in combination with pragmatic BPM and system consolidation strategies, ITSM can help utilities exploit the opportunities that this trend presents.

Three key factors are driving the convergence of management processes across IT assets (PCs, servers and the like) and operational assets (the systems and equipment through which utilities deliver service). The first concerns corporate governance, whereby corporate-wide standards and policies are forcing operational units to rethink their use of “siloed” technologies and are paving the way for new, more integrated investments. Second, utilities are realizing that to deal with their aging assets, workforce and systems dilemmas, they must increase their investments in advanced information and engineering technologies. Finally, the functional boundaries between the IT and operational assets themselves are blurring beyond recognition as more and more equipment utilizes on-board computational systems and is linked over the network via IP addresses.

Utilities need to understand this growing interdependency among assets, including the way individual assets affect service to the business and the requirement to provide visibility into asset status in order to properly address questions relating to risk management and compliance.

Corporate Governance Fuels a Cultural Shift

The convergence of IT and operational technology is changing the relationship between the formerly separate operational and IT groups. The operational units are increasingly relying on IT to help deal with their “aging trilogy” problem, as well as to meet escalating regulatory compliance demands and customers’ reliability expectations. In the past, operating units purchased advanced technology (such as advanced metering or substation automation systems) on an as-needed basis, unfettered by corporate IT policies and standards. In the process, they created multiple silos of nonstandard, non-integrated systems. But now, as their dependence on IT grows, corporate governance policies are forcing operating units to work within IT’s framework. Utilities can’t afford the liability and maintenance costs of nonstandard, disparate systems scattered across their operational and IT efforts. This growing dependence on IT has thus created a new cultural challenge.

A Gartner study of the interaction between IT and operational technology highlights this challenge. It found that “to improve agility and achieve the next level of efficiencies, utilities must embrace technologies that will enable enterprise application access to real-time information for dynamic optimization of business processes. On the other hand, lines of business (LOBs) will increasingly rely on IT organizations because IT is pervasively embedded in operational and energy technologies, and because standard IT platforms, application architectures and communication protocols are getting wider acceptance by OT [operational technology] vendors.”[2]

In fact, an InformationWeek article (“Changes at C-Level,” August 1, 2006) warned that this cultural shift could result in operational conflict if not dealt with. In that article, Nathan Bennett and Stephen Miles wrote, “Companies that look to the IT department to bring a competitive edge and drive revenue growth may find themselves facing an unexpected roadblock: their CIO and COO are butting heads.” As IT assumes more responsibility for running a utility’s operations, the roles of CIO and COO will increasingly converge.

What Is an IT Asset, Anyhow?

An important reason for this shift is the changing nature of the assets themselves, as mentioned previously. Consider the question “What is an IT asset?” In the past, most people would say that this referred to things like PCs, servers, networks and software. But what about a smart meter? It has firmware that needs updates; it resides on a wired or wireless network; and it has an IP address. In an intelligent utility network (IUN), this is true of substation automation equipment and other field-located equipment. The same is true for plant-based monitoring and control equipment. So today, if a smart device fails, do you send a mechanic or an IT technician?

This question underscores why IT asset and service management will play an increasingly important role in a utility’s operations. Utilities will certainly be using more complex technology to operate and maintain assets in the future. Electronic monitoring of asset health and performance based on conditions such as meter or sensor readings and state changes can dramatically improve asset reliability. Remote monitoring agents – from third-party condition monitoring vendors or original equipment manufacturers (OEMs) of highly specialized assets – can help analyze the increasingly complex assets being installed today as well as optimize preventive maintenance and resource planning.

Moreover, utilities will increasingly rely on advanced technology to help them overcome the challenges of their aging assets, workers and systems. For example, as noted above, advanced information technology will be needed to capture the tacit knowledge of experienced workers as well as replace some manual functions with automated systems. Inevitably, operational units will become technology-driven organizations, heavily dependent on the automated systems and processes associated with IT asset and service management.

The good news for utilities is that a playbook of sorts is available that can help them chart the ITSM waters in the future. The de facto global standard for best practices process guidance in ITSM is the IT Infrastructure Library (ITIL), which IT organizations can adopt to support their utility’s business goals. ITIL-based processes can help utilities better manage IT changes, assets, staff and service levels. ITIL extends beyond simple management of asset and service desk activities, creating a more proactive organization that can reduce asset failures, improve customer satisfaction and cut costs. Key components of ITIL best practices include configuration, problem, incident, change and service-level management activities.

Implemented together, ITSM best practices as embodied in ITIL can help utilities:

  • Better align asset health and performance with the needs of the business;
  • Improve risk and compliance management;
  • Improve operational excellence;
  • Reduce the cost of infrastructure support services;
  • Capture tacit knowledge from an aging workforce;
  • Utilize business process management concepts; and
  • More effectively leverage their intelligent assets.

CONCLUSION

The “perfect storm” brought about by aging assets, an aging workforce and legacy IT systems is challenging utilities in ways many have never experienced. The current, fragmented approach to managing assets and services has been a “good enough” solution for most utilities until now. But good enough isn’t good enough anymore, since this fragmentation often has led to siloed systems and organizational “blind spots” that compromise business operations and could lead to regulatory compliance risks.

The convergence of IT and operational technology (with its attendant convergence of asset management processes) represents a challenging cultural change; however, it’s a change that can ultimately confer benefits for utilities. These benefits include not only improvements to the bottom line but also improvements in the agility of the operation and its ability to control risks and meet compliance requirements associated with asset and service management activity.

To help weather the coming perfect storm, utilities can implement best practices in three key areas:

  • BPM technology can help utilities capture and propagate asset management best practices to mitigate the looming “brain drain” and improve operational processes.
  • Judicious system consolidation can improve operational efficiency and eliminate legacy systems that are burdening the business.
  • ITSM best practices as exemplified by ITIL can streamline the convergence of IT and operational assets while supporting a positive cultural shift to help operational business units integrate with IT activities and standards.

Best-practices management of all critical assets based on these guidelines will help utilities facilitate the visibility, control and standardization required to continuously improve today’s power generation and delivery environment.

ENDNOTES

  1. Gartner’s 2006 CIO Agenda survey.
  2. Bradley Williams, Zarko Sumic, James Spiers, Kristian Steenstrup, “IT and OT Interaction: Why Conflict Resolution Is Important,” Gartner Industry Research, Sept. 15, 2006.

The Virtual Generator

Electric utility companies today constantly struggle to find a balance between generating sufficient power to satisfy their customers’ dynamic load requirements and minimizing their capital and operating costs. They spend a great deal of time and effort attempting to optimize every element of their generation, transmission and distribution systems to achieve both their physical and economic goals.

In many cases, “real” generators waste valuable resources – waste that, if not managed efficiently, can go directly to the bottom line. Energy companies therefore find the concept of a “virtual generator,” or a virtual source of energy that can be turned on when needed, very attractive. Although they generally represent only a small percentage of a utility’s overall generation capacity, virtual generators are quick to deploy and cost-effective, and they represent a form of “green energy” that can help utilities meet carbon emission standards.

Virtual generators use forms of dynamic voltage and capacitance (Volt/VAr) adjustments that are controlled through sensing, analytics and automation. The overall process involves first flattening or tightening the voltage profiles by adding additional voltage regulators to the distribution system. Then, by moving the voltage profile up or down within the operational voltage bounds, utilities can achieve significant benefits (Figure 1). It’s important to understand, however, that because voltage adjustments will influence VArs, utilities must also adjust both the placement and control of capacitors (Figure 2).

Various business drivers will influence the use of Volt/VAr. A utility could, for example, use Volt/VAr to:

  • Respond to an external system-wide request for emergency load reduction;
  • Assist in reducing a utility’s internal load – both regional and throughout the entire system;
  • Target specific feeder load reduction through the distribution system;
  • Respond as a peak load relief (a virtual peaker);
  • Optimize Volt/VAr for better reliability and more resiliency;
  • Maximize the efficiency of the system and subsequently reduce energy generation or purchasing needs;
  • Achieve economic benefits, such as generating revenue by selling power on the spot market; and
  • Supply VArs to supplement off-network deficiencies.

Each of the above potential benefits falls into one of four domains: peaking relief, energy conservation, VAr management or reliability enhancement. The peaking relief and energy conservation domains deal with load reduction; VAr management, logically enough, involves management of VArs; and reliability enhancement actually increases load. In this latter domain, the utility will use increased voltage to enable greater voltage tolerances in self-healing grid scenarios or to improve the performance of non-constant power devices to remove them from the system as soon as possible and therefore improve diversity.

Volt/VAr optimization can be applied to all of these scenarios. It is intended to either optimize a utility’s distribution network’s power factor toward unity, or to purposefully make the power factor leading in anticipation of a change in load characteristics.

Each of these potential benefits comes from solving a different business problem. Because of this, at times they can even be at odds with each other. Utilities must therefore create fairly complex business rules supported by automation to resolve any conflicts that arise.

Although the concept of load reduction using Volt/VAr techniques is not new, the ability to automate the capabilities in real time and drive the solutions with various business requirements is a relatively recent phenomenon. Energy produced with a virtual generator is neither free nor unlimited. However, it is real in the sense that it allows the system to use energy more efficiently.

A number of things are driving utilities’ current interest in virtual generators, including the fact that sensors, analytics, simulation, geospatial information, business process logic and other forms of information technology are increasingly affordable and robust. In addition, lower-cost intelligent electrical devices (IEDs) make virtual generators possible and bring them within reach of most electric utility companies.

The ability to innovate an entirely new solution to support the above business scenarios is now within the realm of possibility for the electric utility company. As an added benefit, much of the base IT infrastructure required for virtual generators is the same as that required for other forms of “smart grid” solutions, such as advanced meter infrastructure (AMI), demand side management (DSM), distributed generation (DG) and enhanced fault management. Utilities that implement a well-designed virtual generator solution will ultimately be able to align it with these other power management solutions, thus optimizing all customer offerings that will help reduce load.

HOW THE SOLUTION WORKS

All utilities are required, for regulatory or reliability reasons, to stay within certain high- and low-voltage parameters for all of their customers. In the United States, American National Standards Institute (ANSI) guidelines specify that the nominal voltage for a residential single-phase service should be 120 volts with a plus or minus 6-volt variance (that is, 114 to 126 volts). Other countries around the world have similar guidelines. Whatever the actual values are, all utilities are required to operate within these high- and low-voltage “envelopes.” In some cases, additional requirements may be imposed as to the amount of variance – the number of volts changed or the percent change in the voltage – that can take place over a period of minutes or hours.

Commercial customers may have different high/low values, but the principle remains the same. In fact, it is the mixture of residential, commercial and industrial customers on the same feeder that makes the virtual generation solution almost a requirement if a utility wants to optimize its voltage regulation.

Although it would be ideal for a utility to deliver 120-volt power consistently to all customers, the physical properties of the distribution system as well as dynamic customer loading factors make this difficult. Most utilities are already trying to accomplish this through planning, network and equipment adjustments, and in many cases use of automated voltage control devices. Despite these efforts, however, in most networks utilities are required to run the feeder circuit at higher-than-nominal levels at the head of the circuit in order to provide sufficient voltage for downstream users, especially those at the tails or end points of the circuit.

In a few cases, electric utilities have added manual or automatic voltage regulators to step up voltage at one or more points in a feeder circuit because of nonuniform loading and/or varied circuit impedance characteristics throughout the circuit profile. This stepped-up slope, or curve, allows the utility company to comply with the voltage level requirements for all customers on the circuit. In addition, utilities can satisfy the VAr requirements for operational efficiency of inductive loads using switched capacitor banks, but they must coordinate those capacitor banks with voltage adjustments as well as power demand. Refining voltage profiles through virtual generation usually implies a tight corresponding control of capacitance as well.

The theory behind a robust Volt/VAr-regulated feeder circuit is based on the same principles but applied in an innovative manner. Rather than just using voltage regulators to keep the voltage profile within the regulatory envelope, utilities try to “flatten” the voltage curve or slope. In reality, the overall effect is a stepped/sloped profile due to economic limitations on the number of voltage regulators applied per circuit. This flattening allows an overall reduction in nominal voltage, and in turn the operator may choose to move the voltage curve up or down within the regulatory voltage envelope. Utilities can derive extra benefit from this solution because all customers within a given section of a feeder circuit can be provided with the same voltage level, which should result in fewer “problem” customers who happen not to be in an ideal place on the circuit. It can also minimize the power wasted by overdriving the voltage at the head of the feeder in order to satisfy customers at the tails.
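The arithmetic behind this headroom is simple enough to sketch. The example below, with invented node voltages and the 114-to-126-volt residential envelope cited earlier, computes how far a feeder's flattened voltage profile could be shifted down for conservation or up for reliability scenarios; it illustrates the concept only and is not a production algorithm.

```python
# Given per-node service voltages along a feeder, find how far the whole
# profile can be lowered or raised while every customer stays inside the
# regulatory envelope.

LOW_LIMIT_V = 114.0
HIGH_LIMIT_V = 126.0

def allowable_shift(node_voltages: list[float]) -> tuple[float, float]:
    """Return (max_decrease, max_increase) in volts for the whole profile."""
    max_decrease = min(node_voltages) - LOW_LIMIT_V
    max_increase = HIGH_LIMIT_V - max(node_voltages)
    return max_decrease, max_increase

# Example: a flattened profile leaves more headroom to lower the set point.
profile = [123.5, 122.8, 122.1, 121.6, 121.0]   # head to tail of the feeder
down, up = allowable_shift(profile)
print(f"profile can be lowered by up to {down:.1f} V for conservation,")
print(f"or raised by up to {up:.1f} V for reliability scenarios")
```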

THE ROLE OF AUTOMATION IN DELIVERING THE VIRTUAL GENERATOR

Although theoretically simple in concept, executing and maintaining a virtual generator solution is a complex task that requires real-time coordination of many assets and business rules. Electrical distribution networks are dynamic systems with constantly changing demands, parameters and influencers. Without automation, utilities would find it impossible to deliver and support virtual generators, because it’s infeasible to expect a human – or even a number of humans – to operate such systems affordably and reliably. Therefore, utilities must leverage automation to put humans in monitoring rather than controlling roles.

There are many “inputs” to an automated solution that supports a virtual generator. These include both dynamic and static information sources. For example, real-time sensor data monitoring the condition of the networks must be merged with geospatial information, weather data, spot energy pricing and historical data in a moment-by-moment, repeating cycle to optimize the business benefits of the virtual generator. Complicating this, in many cases the team managing the virtual generator will not “own” all of the inputs required to feed the automated system. Frequently, they must share this data with other applications and organizational stakeholders. It’s therefore critical that utilities put into place an open, collaborative and integrated technology infrastructure that supports multiple applications from different parts of the business.

One of the most critical aspects of automating a virtual generator is having the right analytical capabilities to decide where and how the virtual generator solution should be applied to support the organization’s overall business objectives. For example, utilities should use load predictors and state estimators to determine future states of the network based on load projections under the various Volt/VAr scenarios they’re considering. Additionally, they should use advanced analytics to determine the resiliency of the network and the probability of internal or external events influencing the virtual generator’s application requirements. Still other types of analyses can provide utilities with a current view of the state of the virtual generator and how much energy it’s returning to the system.
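As one small, hypothetical example of the kind of analytic input described above, the sketch below projects the next interval's feeder load by blending recent readings with the same interval on prior days. Real deployments would rely on full state estimation and much richer models.

```python
# A very simple load predictor: weighted blend of the most recent readings and
# the same interval on previous days.

def predict_next_interval(recent_kw: list[float],
                          same_interval_prior_days_kw: list[float],
                          recency_weight: float = 0.6) -> float:
    recent_avg = sum(recent_kw) / len(recent_kw)
    historical_avg = (sum(same_interval_prior_days_kw)
                      / len(same_interval_prior_days_kw))
    return recency_weight * recent_avg + (1 - recency_weight) * historical_avg

forecast = predict_next_interval(
    recent_kw=[612.0, 630.5, 641.2],                   # last three 5-minute readings
    same_interval_prior_days_kw=[655.0, 648.3, 660.9]  # same time on prior days
)
print(f"projected load for next interval: {forecast:.1f} kW")
```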

While it is important that all these techniques be used in developing a comprehensive load-management strategy, they must be unified into an actionable, business-driven solution. The business solution must incorporate the values achieved by the virtual generator solutions, their availability, and the ability to coordinate all of them at all times. A voltage management solution that is already being used to support customer load requirements throughout the peak day will be of little use to the utility for load management. It becomes imperative that the utility understand the effect of all the voltage management solutions when they are needed to support the energy demands on the system.

Using Analytics for Better Mobile Technology Decisions

Mobile computing capabilities have been proven to drive business value by providing traveling executives, field workers and customer service personnel with real-time access to customer data. Better and more timely access to information shortens response times, improves accuracy and makes the workforce more productive.

However, although your organization may agree that technology can improve business processes, different stakeholders – IT management, financial and business leadership and operations personnel – often have different perspectives on the real costs and value of mobility. For example, operations wants tools that help employees work faster and focus more intently on the customer; finance wants the solution that costs the least amount this quarter; and IT wants to implement mobile projects that can succeed without draining resources from other initiatives.

It may not be obvious, but there are ways to achieve everyone’s goals. Analytics can help operations, finance and IT find common ground. When teams understand the data, they can understand the logic. And when they understand the logic they can support making the right decision.

EXPOSING THE FORMULA

Deploying mobile technology is a strategic initiative with far-reaching consequences for the health of an enterprise. In the midst of evaluating a mobile project, however, it’s easy to forget that the real goal of hardware-acquisition initiatives is to make the workforce more productive and improve both the top and bottom lines over the long term.

Most decision-analytics tools focus on up-front procurement questions alone, because the numbers seem straightforward and uncomplicated. But these analyses miss the point. The best analysis is one that can determine which of the solutions will provide the most advantages to the workforce at the lowest possible overall cost to the organization.

To achieve the best return on investment we must do more than recoup an out-of-pocket expense: Are customers better served? Are employees working better, faster, smarter? Though hard to quantify, these are the fundamental aspects that determine the return on investment (ROI) of technology.

It’s possible to build a vendor-neutral analysis to calculate the total cost of ownership (TCO) and ROI of mobile computers. Panasonic Computer Solutions Company, the manufacturer of Toughbook notebooks, enlisted the services of my analytics company, Serious Networks, Inc., to develop an unbiased TCO/ROI application to help companies make better decisions when purchasing mobile computers.

The Panasonic-sponsored operational analysis tool provides statistically valid answers by performing a simulation of the devices as they would be used and managed in the field, generating a model that compares the costs and benefits of multiple manufacturers’ laptops. Purchase cost, projected downtime, the range of wireless options, notebook features, support and other related costs are all incorporated into this analytic toolset.

From more than 100 unique simulations with actual customers, four key TCO/ROI questions emerged:

  • What will it cost to buy a proposed notebook solution?
  • What will it cost to own it over the life of the project?
  • What will it cost to deploy and decommission the units?
  • What value will be created for the organization?

MOVING BEYOND GUESSTIMATES – CONSIDERING COSTS AND VALUE OVER A LIFETIME

There is no such thing as an average company, so an honest analysis uses actual corporate data instead of industry averages. Just because a device is the right choice for one company does not make it the right choice for yours.

An effective simulation takes into account the cost of each competing device, the number of units and the rate of deployment. It calculates the cost of maintaining a solution and establishes the value of productive time using real loaded labor rates or revenue hours. It considers buy versus lease questions and can extrapolate how features will be used in the field.

As real-world data is entered, the software determines which mobile computing solution is most likely to help the company reach its goals. Managers can perform what-if analyses by adjusting assumptions and re-running the simulation. Within this framework, managers will build a business case that forecasts the costs of each mobile device against the benefits derived over time (see Figures 1 and 2).
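The sketch below illustrates the general shape of such a what-if model in Python: a handful of scenario inputs, a total-cost calculation and a simple ROI figure that managers could re-run under different assumptions. The field names and numbers are invented for illustration and do not reproduce the Panasonic/Serious Networks tool.

```python
# Hypothetical TCO/ROI what-if model for a candidate mobile device over the
# life of a deployment.

from dataclasses import dataclass

@dataclass
class DeviceScenario:
    unit_price: float                           # purchase cost per unit
    units: int
    annual_support_cost: float                  # per unit, per year
    annual_failure_rate: float                  # fraction of units failing per year
    downtime_hours_per_failure: float
    productivity_hours_saved_per_unit_per_year: float
    loaded_labor_rate: float                    # $/hour
    years: int

def tco(s: DeviceScenario) -> float:
    acquisition = s.unit_price * s.units
    support = s.annual_support_cost * s.units * s.years
    downtime = (s.annual_failure_rate * s.units * s.years
                * s.downtime_hours_per_failure * s.loaded_labor_rate)
    return acquisition + support + downtime

def roi(s: DeviceScenario) -> float:
    value = (s.productivity_hours_saved_per_unit_per_year * s.units * s.years
             * s.loaded_labor_rate)
    cost = tco(s)
    return (value - cost) / cost

rugged = DeviceScenario(3500, 200, 150, 0.05, 8, 40, 65, 5)
print(f"TCO: ${tco(rugged):,.0f}  ROI: {roi(rugged):.0%}")
```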

MAKING INTANGIBLES TANGIBLE

The 90-minute analysis process is very granular. It’s based on the industry segment – because it simulates the tasks of the workforce – and compares up to 10 competing devices.

Once devices are selected, purchase or lease prices are entered, followed by value-added benefits like no-fault warranties and on-site support. Intangible factors favoring one vendor over another, such as incumbency, are added to the data set. The size and rate of the deployment, as well as details that determine the cost of preparing the units for the workforce, are also considered.

Next the analysis accounts for the likelihood and cost of failure, using your own experience as a baseline. Somewhat surprisingly, the impact of failure is given less weight than most outside observers would expect. Reliability is important, but it’s not the only or most important attribute.

Productivity and operational enhancements are given more weight, since they can have a significantly greater financial impact than reliability: statistically, employees spend far more of their time working than dealing with equipment malfunctions.

A matrix of features and key workforce behaviors is developed to examine the relative importance of touch screens, wireless and GPS, as well as each computer vendor’s ability to provide those features as standard or extra-cost equipment. The features are rated for their time and motion impact on your organization, and an operations efficiency score is applied to imitate real-world results.

During the session, the workforce is described in detail, because this information directly affects the cost and benefit. To assess the value of a telephone lineman’s time, for example, the system must know the average number of daily service orders, the percentage of those service calls that require re-work and whether linemen are normally in the field five, six or seven days a week.

Once the data is collected and input it can be modified to provide instantaneous what-if, heads-up and break-even analyses reports – without interference from the vendor. The model is built in Microsoft Excel so that anyone can assess the credibility of the analysis and determine independently that there are no hidden calculations or unfair formulas skewing the results.

CONCLUSION

The Panasonic simulation tool can help different organizations within a company come to consensus before making a buying decision. Analytics help clarify whether a purpose-built rugged or business-rugged system or some other commercial notebook solution is really the right choice for minimizing the TCO and maximizing the ROI of workforce mobility.

ABOUT THE AUTHOR

Jason Buk is an operations director at Serious Networks, Inc., a Denver-based business analytics firm. Serious Networks uses honest forecasting and rigorous analysis to determine what resources are most likely to increase the effectiveness of the workforce, meet corporate goals and manage risk in the future.

Analyzing Substation Asset Health Information for Increased Reliability And Return on Investment

Asset Management, Substation Automation, AMI and Intelligent Grid Monitoring are common and growing investments for most utilities today. The foundation for effective execution of these initiatives is built upon the ability to efficiently collect, store, analyze and report information from the rapidly growing number of smart devices and business systems. Timely and automated access to this information is now more than ever helping drive the profitability and success of utilities. Most utilities have made significant investments in modern substation equipment but fail to continuously analyze and interpret the real-time health indicators of these assets. Continued investment in state-of-the-art operational assets will yield little return on investment unless the information can be harvested and interpreted in a meaningful way.

DATA CAPTURE AND PRESENTATION

InStep’s eDNA (Enterprise Distributed Network Architecture) software is used by many of the world’s leading utilities to collect, store, display and report on the operational and asset health-related information produced by their intelligent assets. eDNA is a highly scalable enterprise application specifically designed for integrating data from SCADA, IEDs, utility meters and other smart devices with the corporate enterprise. This provides centralized access to the real-time, historical and asset health related data that most applications throughout a utility depend upon for managing reliability and profitability.

A real-time historian is needed to collect, organize and report the substation asset measurement data. Today, asset health monitoring is often absent, or it consists of fixed alarm limits defined within the device or historian, with fixed end-of-life calculations used to determine an asset’s health. It is a daunting task to identify and maintain fixed limits and calculations when the appropriate values vary with actual device characteristics, operating history, ambient conditions and device settings. As a result, the historian alone does not provide a complete asset monitoring strategy.

ADVANCED ANALYTICS

InStep’s PRiSM software is a self-learning analytic application for monitoring the real-time health of critical assets in support of Condition Based Maintenance (CBM). PRiSM uses artificial intelligence and sophisticated data-mining techniques to determine when a piece of equipment is performing poorly or is likely to fail. The early identification of equipment problems leads to reduced maintenance costs and increased availability, reliability, production quality and capacity.

The software learns from an asset’s individual operating history and develops a series of operational profiles for each piece of equipment. These operational profiles are compared with the equipment’s real-time data to identify and predict failures before they occur. Alarms and email notifications alert personnel to pending problems. PRiSM also includes an advanced analysis application for identifying why an asset is not performing as expected.
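A minimal, hypothetical sketch of that baseline-versus-actual comparison follows: it learns a simple operating band (mean and spread) from an asset's own history and flags readings that drift outside it. PRiSM's actual models are considerably more sophisticated; this only illustrates the principle.

```python
# Learn a simple operating profile from an asset's own history, then flag
# real-time readings that fall outside the learned band.

import statistics

def learn_profile(history: list[float]) -> tuple[float, float]:
    return statistics.mean(history), statistics.stdev(history)

def check_reading(reading: float, profile: tuple[float, float],
                  sigma_limit: float = 3.0) -> bool:
    """Return True if the reading is outside the learned operating band."""
    mean, stdev = profile
    return abs(reading - mean) > sigma_limit * stdev

# Example: transformer top-oil temperature (degrees C) under similar loading
history = [62.1, 63.4, 61.8, 62.9, 63.1, 62.5, 61.9, 63.0]
profile = learn_profile(history)
if check_reading(71.2, profile):
    print("alert: reading deviates from this asset's learned profile")
```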

TECHNOLOGY ADVANCEMENT

Utilities are rapidly replacing legacy devices and systems with modern technologies. These new systems are typically better instrumented to provide utilities with the information necessary to more effectively operate and better maintain their assets. The status of a breaker can be good information for determining the path of power flow but does not provide enough information to determine the health of the device or when it is likely to fail. Modern IEDs and utility meters support tens to hundreds of data points in a single device. This data is quite valuable and necessary in supporting a modern utility asset management program. Many common utility applications such as maintenance management, outage management, meter data management, capacity planning and other advanced analytical systems can be best leveraged when accurate high-resolution historical data is readily available. An intelligent condition monitoring analytical layer is needed for effective monitoring of such a large population of devices and sensors.

CONCLUSION

The need for efficient and effective data management is rapidly growing as utilities continue to update their assets and business systems. This is further driving the need for a highly scalable enterprise historian. The historian is expanding beyond the traditional role of supporting operations and is becoming a key application for effective asset management and overall business success. The historian alone does not provide for a robust real-time asset health monitoring strategy, but when combined with an advanced online condition monitoring application such as InStep’s PRiSM technology, significant savings and increased reliability can be achieved. InStep continues to play a key and growing role in supporting many of the most successful utilities in their operational, reliability and asset monitoring efforts.