Managing the Plant Data Lifecycle

Intelligent Plant Lifecycle Management
(iPLM) is the process of managing a
generation facility’s data and information
throughout its lifetime – from initial
design through to decommissioning. This
paper will look at results from the application
of this process in other industries
such as shipbuilding, and show how those
results are directly applicable to the
design, construction, operation and maintenance
of complex power generation
facilities, specifically nuclear and clean
coal plants.

In essence, iPLM can unlock substantial
business value by shortening plant development
times, and efficiently finding,
reusing and changing plant data. It also
enables an integrated and transparent
collaborative environment to manage
business processes.

Recent and substantial global focus on
greenhouse gas emissions, coupled with rising and volatile fossil fuel prices, rapid
economic growth in nuclear-friendly Asian
countries, and energy security concerns,
is driving a worldwide resurgence in commercial
nuclear power interest.

The power generation industry is
undergoing a global transformation that
is putting pressure on traditional methods
of operation, and opening the door to substantial
innovation. Factors such as the transition to a carbon-constrained world, which greatly affects a generation company’s portfolio mix decisions; escalating constraints in the global supply chain for raw materials and key plant components; and fuel price volatility and security-of-supply concerns mean that generation companies must make substantial investments in an environment of increasing uncertainty.

In particular, there is a renewed interest
globally in the development of new
nuclear power plants. Plants continue
to be built in parts of Asia and Central
Europe, while a resurgence of interest
is seen in North America and Europe.
Combined with the developing interest in
building clean coal facilities, the power
generation industry is facing a large
number of very complex development
projects.

A key constraint being felt worldwide, however, is a severe and increasing shortage of qualified technical personnel to design, build and operate new generation facilities. Additionally, as most of the
world’s existing nuclear fleet reaches the
end of its originally designed life span, relicensing
these nuclear plants to operate
another 10, 20, or even 30 years is taking
place globally.

Sowing Plant Information

iPLM can be thought of as lifecycle
management of information and data
about the plant assets (see Figure 1). It
also includes the use of this information
over the physical plant’s complete lifecycle
to minimize project and operational
risk, and optimize plant performance.

This information includes design
specifications, construction plans, component
and system operating instructions,
real-time and archived operating data,
as well as other information sources and
repositories. Traditionally, it has been difficult to manage all of this structured and
unstructured data in a consistent manner
across the plant lifecycle to create a single
version of the truth.

In addition, a traditional barrier has
existed between the engineering and
construction phases, and the operations
and maintenance phases (see Figure 2).
So even if the technical issues of interconnectivity
and data/information management
are resolved via an iPLM solution, it
is still imperative to change the business
processes associated with these domains
to take full advantage.

Benefits

iPLM combines the benefits of a fully integrated PLM environment with a connected repository and flow of operational information, including data from enterprise asset management (EAM) systems. Specific iPLM benefits are:

  • Ability to accurately assess initial
    requirements before committing to
    capital equipment orders;
  • Efficient balance of owner requirements
    with best practices and regulatory compliance;
  • Performing design work and simulation
    as early as possible to ensure the
    plant can be built within schedule and
    budget;
  • Better project execution with real-time
    information that is updated automatically
    through links to business processes,
    tasks, documents, deliverables
    and other data sources;
  • Designing and engineering multi-disciplinary
    components – from structural
    to electrical and fluid systems – to
    ensure the plant is built right the first
    time;
  • Ability to virtually plan how plants and
    structures will be constructed to minimize
    costly rework;
  • Optimization of operations and maintenance
    processes to reduce downtime
    and deliver long-term profits to the
    owners;
  • Ensuring compliance to regulatory and
    safety standards;
  • Maximizing design and knowledge
    reuse from one successful project to
    another;
  • Managing complexity, including sophisticated
    plant systems, and the interdependent
    work of engineering consultants,
    suppliers and the construction
    sites;
  • Visibility of evolving design and changing
    requirements to all stakeholders
    during new or retrofitting projects; and
  • Providing owners and operators a primary repository for all plant information
    and the processes that govern it
    throughout the plant lifecycle.

Benefits accrue at different times in the
plant lifecycle, and to different stakeholders.
They also depend heavily on the consistent
and dedicated implementation of
basic iPLM solution tenets.

Value Proposition

PLM solutions enable clients to optimize
the creation and management of complex
information assets over a project’s
complete lifecycle. Shipbuilding PLM, in
particular, offers an example similar to the
commercial nuclear energy generation
ecosystem. Defense applications, such as
nuclear destroyer and aircraft carrier platform
developments, are particularly good
examples.

A key aspect of the iPLM value proposition
is the seamless integration of data
and information throughout the design,
build, operate and maintain processes
for industrial plants. The iPLM concept is
well accepted by the commercial nuclear ecosystem. There is an understanding
by engineering companies, utilities and
regulators that information/data transparency,
information lifecycle management
and better communication throughout the
ecosystem is necessary to build timely,
cost effective, safe and publicly accepted
nuclear power plants.

iPLM leverages capabilities in PLM,
EAM and Electronic Content Management
(ECM), combined with data management/
integration, information lifecycle management,
business process transformation
and integration with other nuclear functional
applications through a Service Oriented
Architecture (SOA)-based platform.
iPLM can also provide a foundation on
which to drive high-performance computing
into commercial nuclear operations,
since simulation requires consistent valid,
accessible data sets to be effective.

A hallmark of the iPLM vision is that it
is an integrated solution in which information
related to the nuclear power plant
flows seamlessly across a complete and
lengthy lifecycle. There are a number of
related systems with which an iPLM solution
must integrate. Therefore, adherence
to industry standard interoperability and
data models is necessary for a robust
iPLM solution. One appropriate data
model standard is ISO 15926, recently
developed to facilitate data interoperability.

Combining EAM and PLM

Incorporating EAM with PLM is an
example of one of the key integrations
created by an iPLM solution, and it provides
several benefits. First, it establishes the
basis for a cradle-to-grave data and work
process repository for all information
applicable to a new nuclear power plant.
A single version of the truth becomes
available early in the project design, and
remains applicable in the construction,
start-up and test, and turnover phases of
the project.

Second, with the advent of single-step
licensing in many parts of the world (consider
the COLA, or combined Construction
and Operating License Application
in the U.S.), licensing risk is considerably
reduced by consistent maintenance of plant information. Demonstrating that the
plant being started up is the same plant
that was designed and licensed becomes
more straightforward and transparent.

Third, using an EAM system during construction,
and incrementally incorporating
the deep functionality necessary for EAM
in the plant operations, can facilitate and
shorten the plant transfer period from the
designers and constructors to the owners
and operators.

Finally, the time and cost to build a new
plant is significant, and delay in connecting
the plant to the grid for the safe generation
of megawatts can easily cost millions
of dollars. The formidable challenges
of nuclear construction, however, may be
offset by an SOA-based integrated information
system, replacing the traditional
unique and custom designed applications.

To help address these challenges, the
power generation industry ecosystem –
including utilities, engineering companies,
reactor and plant designers, and regulators
– can benefit by looking at methodologies
and results from other industries that
have continued to design, build, operate
and maintain highly complex systems
throughout the last 10 to 20 years.

Here we examine what the shipbuilding
industry has done, results it achieved, and
where it is going.

Experiences In Shipbuilding

The shipbuilding industry has many
similarities to the development of a new
nuclear or clean coal plant. Both are very
complex, long lifecycle assets (35 to 70
years) which require precise and accurate
design, construction, operation and
maintenance to both fulfill their missions
and operate safely over their lifetimes. In
addition, the respective timeframe and
costs of designing and building these
assets (five to 10 years and $5 billion to
$10 billion) create daunting challenges
from a project management and control
point of view.

An example of a successful implementation
of an iPLM-like solution in the shipbuilding
industry is a project completed
for Northrop Grumman’s development of
the next generation of U.S. surface combat
ships, a four-year, $2.9 billion effort.
This was a highly complex, collaborative
project completed by IBM and Dassault
Systemes to design and construct a new
fleet of ships with a keen focus on supporting
efficient production, operation
and maintenance of the platform over its
expected lifecycle.

A key consideration in designing, constructing
and operating modern ships
is increasing complexity of the assets,
including advanced electronics, sensors
and communications. These additional
systems and requirements greatly multiply
the number of simultaneous constraints
that must be managed within the
design, considered during construction
and maintained and managed during
operations. This not only includes more
system complexity, but also adds to the
importance of effective collaboration, as
many different companies and stakeholders
must be involved in the ship’s overall
design and construction.

An iPLM system helps to enforce standardization,
enabling lean manufacturing
processes and enhancing producibility of
various plant modules. For information
technology architecture to continue to be
relevant over the ship’s lifecycle, it is paramount
that it be based on open standards
and adhere to the most modern software
and hardware architectural philosophies.

To provide substantive value, both for
cost and schedule, tools such as real-time
interference checking, advanced visualization,
early-validation and constructability
analysis are key aspects of an iPLM solution
in the ship’s early design cycle. For
instance, early visualization allows feedback
from construction, operations and
maintenance back into the design process
while changes can still be made inexpensively.

There are also iPLM solution benefits
for the development of future projects.
Knowledge reuse is essential for decreasing
costs and schedules for future units,
and for continuous improvement of
already built units. iPLM provides for
more predictable design and construction
schedules and costs, reducing risk for the
development of new plants.

It is also necessary to consider cultural
change within the ecosystem to reap the
full iPLM solution benefits. iPLM represents
a fundamentally different way of
collaborating and closing the loop between
the various parts of the ship development
and operation lifecycle. As such, people
and processes must change to take advantage
of the tools and capabilities. Without
these changes, many of the benefits of an
iPLM solution could be lost.

Here are some sample cost and schedule
benefits from Navy shipbuilding implementations
of iPLM:

  • Documentation errors reduced 15 percent;
  • Performance to schedule increased 25 percent;
  • Labor cost for engineering analysis reduced 50 percent;
  • Change process cost and time reduced 15 percent; and
  • Error correction cost during production reduced 15 percent.

Conclusions

An iPLM approach to design, construction,
operation and maintenance of a
commercial nuclear power plant – while
requiring reactor designers, engineering
companies, owner/operators, and regulators
to fundamentally change the way
they approach these projects – has been
shown in other industries to have substantial
benefits related to cost, schedule and
long-term operation and maintainability.

By developing and delivering two plants
to the customer – the physical plant and
the “digital plant” – an iPLM approach
yields substantial advantages during both
plant construction and operation. Financial markets, shareholders,
regulators and the general public
will have more confidence in the development
and operation of these plants
through the predictability, performance to
schedule and cost and transparency that
an iPLM solution can help provide.

Analyzing Substation Asset Health Information for Increased Reliability And Return on Investment

Asset Management, Substation Automation, AMI and Intelligent Grid Monitoring are common and growing investments for most utilities today. The foundation for effective execution of these initiatives is built upon the ability to efficiently collect, store, analyze and report information from the rapidly growing number of smart devices and business systems. Timely and automated access to this information is now more than ever helping drive the profitability and success of utilities. Most utilities have made significant investments in modern substation equipment but fail to continuously analyze and interpret the real-time health indicators of these assets. Continued investment in state-of-the-art operational assets will yield little return on investment unless the information can be harvested and interpreted in a meaningful way.

DATA CAPTURE AND PRESENTATION

InStep’s eDNA (Enterprise Distributed Network Architecture) software is used by many of the world’s leading utilities to collect, store, display and report on the operational and asset health-related information produced by their intelligent assets. eDNA is a highly scalable enterprise application specifically designed for integrating data from SCADA, IEDs, utility meters and other smart devices with the corporate enterprise. This provides centralized access to the real-time, historical and asset health related data that most applications throughout a utility depend upon for managing reliability and profitability.

A real-time historian is needed for collection, organization and reporting of the substation asset measurement data. Today, asset health monitoring is often absent, or it consists of fixed alarm limits defined within the device or historian. Additionally, fixed end-of-life calculations are used to determine an asset’s health. It is a daunting task to identify and maintain fixed limits and calculations for behavior that varies with the actual device characteristics, operating history, ambient conditions and device settings. As a result, the historian alone does not provide a complete asset monitoring strategy.
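The fixed-limit checking described above can be sketched in a few lines. The tag names and thresholds here are hypothetical examples, not values from any real historian; the point is that a fixed limit ignores load, ambient conditions and the asset's own history:

```python
# Hypothetical tags with fixed (low, high) alarm limits, as a historian
# or device might define them.
FIXED_LIMITS = {
    "XFMR1.TopOilTempC": (0.0, 95.0),
    "XFMR1.H2ppm":       (0.0, 100.0),
}

def check_fixed_limits(tag, value, limits=FIXED_LIMITS):
    """Return an alarm string if the value violates its fixed limits.

    Note what this cannot do: the same 95 C limit applies whether the
    transformer is lightly loaded on a cold night or heavily loaded on
    a hot afternoon, which is the weakness noted in the text.
    """
    low, high = limits[tag]
    if value < low:
        return f"{tag} LOW ({value} < {low})"
    if value > high:
        return f"{tag} HIGH ({value} > {high})"
    return None
```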

ADVANCED ANALYTICS

InStep’s PRiSM software is a self-learning analytic application for monitoring the real-time health of critical assets in support of Condition Based Maintenance (CBM). PRiSM uses artificial intelligence and sophisticated data-mining techniques to determine when a piece of equipment is performing poorly or is likely to fail. The early identification of equipment problems leads to reduced maintenance costs and increased availability, reliability, production quality and capacity.

The software learns from an asset’s individual operating history and develops a series of operational profiles for each piece of equipment. These operational profiles are compared to the equipment’s real-time data to identify and predict failures before they occur. Alarm and email notifications alert personnel to impending problems. PRiSM includes an advanced analysis application for identifying why an asset is not performing as expected.
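The learn-a-profile-then-compare idea can be illustrated with a minimal statistical sketch. This is not PRiSM's actual algorithm (the text describes it only as self-learning AI and data mining); it is a generic stand-in that learns a mean and spread from an asset's own history and flags real-time readings that fall outside a band:

```python
import statistics

def learn_profile(history):
    """Build a simple operational profile (mean, stdev) from an
    asset's own operating history."""
    return statistics.mean(history), statistics.stdev(history)

def is_anomalous(profile, reading, n_sigmas=3.0):
    """Flag a real-time reading that deviates from the learned profile
    by more than n_sigmas standard deviations."""
    mean, stdev = profile
    return abs(reading - mean) > n_sigmas * stdev
```

Because the profile is derived from each asset's individual history, the alarm band adapts per device rather than relying on one fixed limit for the whole fleet.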

TECHNOLOGY ADVANCEMENT

Utilities are rapidly replacing legacy devices and systems with modern technologies. These new systems are typically better instrumented to provide utilities with the information necessary to more effectively operate and better maintain their assets. The status of a breaker can be good information for determining the path of power flow but does not provide enough information to determine the health of the device or when it is likely to fail. Modern IEDs and utility meters support tens to hundreds of data points in a single device. This data is quite valuable and necessary in supporting a modern utility asset management program. Many common utility applications such as maintenance management, outage management, meter data management, capacity planning and other advanced analytical systems can be best leveraged when accurate high-resolution historical data is readily available. An intelligent condition monitoring analytical layer is needed for effective monitoring of such a large population of devices and sensors.

CONCLUSION

The need for efficient and effective data management is rapidly growing as utilities continue to update their assets and business systems. This is further driving the need for a highly scalable enterprise historian. The historian is expanding beyond the traditional role of supporting operations and is becoming a key application for effective asset management and overall business success. The historian alone does not provide for a robust real-time asset health monitoring strategy, but when combined with an advanced online condition monitoring application such as InStep’s PRiSM technology, significant savings and increased reliability can be achieved. InStep continues to play a key and growing role in supporting many of the most successful utilities in their operational, reliability and asset monitoring efforts.

Is Your Mobile Workforce Truly Optimized?

ClickSoftware is the leading provider of mobile workforce management and service optimization solutions that create business value for service operations through higher levels of productivity, customer satisfaction and cost effectiveness. Combining educational, implementation and support services with best practices and its industry leading solutions, ClickSoftware drives service decision making across all levels of the organization.

Our mobile workforce management solution helps utilities empower mobile workers with accurate, real-time information for optimum service and quick on-site decision making. From proactive customer demand forecasting and capacity planning to real-time decision-making, incorporating scheduling, mobility and location-based services, ClickSoftware helps service organizations get the most out of their resources.

The IBM-ClickSoftware alliance provides the most comprehensive offering for Mobile Workforce and Asset Management powering the real-time service enterprise. Customers can benefit from maximized workforce productivity and customer satisfaction while controlling, and then minimizing, operational costs.

ClickSoftware provides a flexible, scalable and proven solution that has been deployed at many utility companies around the world. Highlights include the ability to:

  • Automatically update the schedule based on real-time information from the field;
  • Manage crews (parts and people);
  • Cover a wide variety of job types within one product – from short jobs requiring one person to multistage jobs needing a multi-person team over several days or weeks;
  • Balance regulatory, environmental and union compliance;
  • Continuously strive to raise the bar in operational excellence;
  • Incorporate street-level routing into the decision-making process; and
  • Plan for the catastrophic events and seasonal variability in field service operations.

The resulting value proposition to the customer is extremely compelling:

  • Typically, optimized scheduling and routing of the mobile workforce generates a 31 percent increase in jobs per day versus the industry average (Source: AFSMI survey 2003).
  • A variety of solutions, ranging from entry level to advanced, directly address the broad spectrum of pains experienced by service organizations around the world, including optimized scheduling, routing, mobile communications and integration of solutions components – within the service optimization solution itself and also into the CRM/ERP/EAM back end.
  • An entry level offering with a staged upgrade path toward a fully automated service optimization solution ensures that risk is managed and the most challenging of customer requirements may be met. This "least risk" approach for the customer is delivered by a comprehensive set of IBM business consulting, installation and support services.
  • The industry-proven credibility of ClickSoftware’s ServiceOptimization Suite, combined with IBM’s wireless middleware, software, hardware and business consulting services, provides the customer with the most effective platform for managing field service operations.

ClickSoftware’s customers represent a cross section of leaders in the utilities, telecommunications, computer and office equipment, home services, and capital equipment industries. Close to 100 customers around the world have employed ClickSoftware service optimization solutions and services to achieve optimal levels of field service.

To find out more visit www.clicksoftware.com or call 888.438.3308.

Weathering the Perfect Storm

A “perfect storm” of daunting proportions is bearing down on utility companies: assets are aging; the workforce is aging; and legacy information technology (IT) systems are becoming an impediment to efficiency improvements. This article suggests a three-pronged strategy to meet the challenges posed by this triple threat. By implementing best practices in the areas of business process management (BPM), system consolidation and IT service management (ITSM), utilities can operate more efficiently and profitably while addressing their aging infrastructure and staff.

BUSINESS PROCESS MANAGEMENT

In a recent speech before the Utilities Technology Conference, the CIO of one of North America’s largest integrated gas and electric utilities commented that “information technology is a key to future growth and will provide us with a sustainable competitive advantage.” The quest by utilities to improve shareholder and customer satisfaction has led many CIOs to reach this same conclusion: nearly all of their efforts to reduce the costs of managing assets depend on information management.

Echoing this observation, a survey of utility CIOs showed that the top business issue in the industry was the need to improve business process management (BPM).[1] It’s easy to see why.

BPM enables utilities to capture, propagate and evolve asset management best practices while maintaining alignment between work processes and business goals. For most companies, the standardized business processes associated with BPM drive work and asset management activities and bring a host of competitive advantages, including improvements in risk management, revenue generation and customer satisfaction. Standardized business processes also allow management to more successfully implement business transformation in an environment that may include workers acquired in a merger, workers nearing retirement and new workers of any age.

BPM also helps enforce a desirable culture change by creating an adaptive enterprise where agility, flexibility and top-to-bottom alignment of work processes with business goals drive the utility’s operations. These work processes need to be flexible so management can quickly respond to the next bump in the competitive landscape. Using standard work processes drives desired behavior across the organization while promoting the capture of asset-related knowledge held by many long-term employees.

Utility executives also depend on technology-based BPM to improve processes for managing assets. This allows them to reduce staffing levels without affecting worker safety, system reliability or customer satisfaction. These processes, when standardized and enforced, result in common work practices throughout the organization, regardless of region or business unit. BPM can thus yield an integrated set of applications that can be deployed in a pragmatic manner to improve work processes, meet regulatory requirements and reduce total cost of ownership (TCO) of assets.

BPM Capabilities

Although the terms business process management and work flow are often used synonymously – and are indeed related – they refer to distinctly different things. BPM is a strategic activity undertaken by an organization looking to standardize and optimize business processes, whereas work flow refers to IT solutions that automate processes – for example, solutions that support the execution phase of BPM.

There are a number of core BPM capabilities that, although individually important, are even more powerful than the sum of their parts when leveraged together. Combined, they provide a powerful solution to standardize, execute, enforce, test and continuously improve asset management business processes. These capabilities include:

  • Support for local process variations within a common process model;
  • Visual design tools;
  • Revision management of process definitions;
  • Web services interaction with other solutions;
  • XML-based process and escalation definitions;
  • Event-driven user interface interactions;
  • Component-based definition of processes and subprocesses; and
  • Single engine supporting push-based (work flow) and polling-based (escalation) processes.
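The last capability above, a single engine serving both push-based work flow and polling-based escalation, can be sketched generically. The record fields and rule shapes here are hypothetical; no particular BPM product's API is implied:

```python
class ProcessEngine:
    """Minimal sketch: one engine handles push-based work-flow events
    and polling-based escalations, per the capability list above."""

    def __init__(self):
        self.handlers = {}      # event name -> callback (push path)
        self.escalations = []   # (predicate, action) pairs (polled path)
        self.log = []

    def on_event(self, name, callback):
        self.handlers[name] = callback

    def add_escalation(self, predicate, action):
        self.escalations.append((predicate, action))

    def publish(self, name, record):
        """Push path: an event drives the next work-flow step immediately."""
        if name in self.handlers:
            self.log.append(self.handlers[name](record))

    def poll(self, records, now):
        """Polling path: periodically scan records for overdue conditions."""
        for rec in records:
            for predicate, action in self.escalations:
                if predicate(rec, now):
                    self.log.append(action(rec))

# Hypothetical usage: dispatch on approval (push), escalate work orders
# left open for more than a day (poll).
engine = ProcessEngine()
engine.on_event("work_order_approved", lambda r: f"dispatch {r['id']}")
engine.add_escalation(
    lambda r, now: now - r["created"] > 86400 and r["status"] == "open",
    lambda r: f"escalate {r['id']}",
)
engine.publish("work_order_approved", {"id": "WO-1"})
engine.poll([{"id": "WO-2", "created": 0, "status": "open"}], now=90000)
```

Keeping both paths in one engine means one revision-managed process definition governs both the immediate work flow and the time-based escalation, rather than two separate tools.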

Since BPM supports knowledge capture from experienced employees, what is the relationship between BPM and knowledge management? Research has shown that the best way to capture the knowledge residing in workers’ heads is to transfer it into systems they already use. Work and asset management systems hold job plans, operational steps, procedures, images, drawings and other documents. These systems are also the best place to put information required to perform a task that an experienced worker “just knows” how to do.

By creating appropriate work flows in support of BPM, workers can be guided through a “debriefing” stage, where they can review existing job plans and procedures, and look for tasks not sufficiently defined to be performed without the tacit knowledge learned through experience. Then, the procedure can be flagged for additional input by a knowledgeable craftsperson. This same approach can even help ensure the success of the “debriefing” application itself, since BPM tools by definition allow guidance to be built in by creating online help or by enhancing screen text to explain the next step.
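The debriefing step described above can be sketched as a small routine: a reviewer walks the job plan and flags any task that cannot be performed from the written steps alone, so a knowledgeable craftsperson can add detail later. Field names and statuses are hypothetical:

```python
def debrief(job_plan, is_sufficient):
    """Flag job-plan tasks that rely on unwritten tacit knowledge.

    `is_sufficient` captures the reviewer's judgment: can this task be
    performed from the written steps alone?  Returns the flagged names.
    """
    flagged = []
    for task in job_plan["tasks"]:
        if not is_sufficient(task):
            task["status"] = "needs_expert_input"   # route to a craftsperson
            flagged.append(task["name"])
    return flagged

# Hypothetical plan: one task has documented steps, one does not.
plan = {"tasks": [
    {"name": "isolate breaker", "steps": 5},
    {"name": "adjust relay", "steps": 0},
]}
flagged = debrief(plan, lambda t: t["steps"] > 0)
```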

SYSTEM CONSOLIDATION

System consolidation needs to involve more than simply combining applications. For utilities, system consolidation efforts ought to focus on making systems agile enough to support near real-time visibility into critical asset data. This agility will yield transparency across lines of business on the one hand, and satisfy regulators and customers on the other. To achieve this level of transparency, utilities must adopt a modern enterprise architecture that supports service-oriented architectures (SOAs) as well as BPM.

Done right, system consolidation allows utilities to create a framework supporting three key business areas:

  • Optimization of both human and physical assets;
  • Standardization of processes, data and accountability; and
  • Flexibility to change and adapt to what’s next.

The Need for Consolidation

Many utility transmission and distribution (T&D) divisions exhibit this need for consolidation. Over time, the business operations of many of these divisions have introduced different systems to support a perceived immediate need – without considering similar systems that may already be implemented within the utility. Eventually, the business finds it owns three different “stacks” of systems managing assets, work assignments and mobile workers – one for short-cycle service work, one for construction and still another for maintenance and inspection work.

With these systems in place, it’s nearly impossible to implement productivity programs – such as cross-training field crews in both construction and service work – or to take advantage of a “common work queue” that would allow workers to fill open time slots without returning to their regional service center. In addition, owning and operating these “siloed” systems adds significant IT costs, as each one has annual maintenance fees, integration costs, yearly application upgrades and retraining requirements.

In such cases, using one system for all work and asset management would eliminate multiple applications and deliver bottom-line operational benefits: more productive workers, more reliable assets and technology cost savings. One large Midwestern utility adopting the system consolidation approach was able to standardize on six core applications: work and asset management, financials, document management, geographic information systems (GIS), scheduling and mobile workforce management. The asset management system alone was able to consolidate more than 60 legacy applications. In addition to the obvious cost savings, these consolidated asset management systems are better able to address operational risk, worker health and safety and regulatory compliance – both operational and financial – making utilities more competitive.

A related benefit of system consolidation concerns the elimination of rogue “pop-up” applications. These are niche applications, often spreadsheets or standalone databases, which “pop up” throughout an organization on engineers’ desktops. Many of these applications perform critical roles in regulatory compliance yet are unlikely to pass muster at any Sarbanes-Oxley review. Typically, these pop-up applications are built to fill a “functionality gap” in existing legacy systems. Using an asset management system with a standards-based platform allows utilities to roll these pop-up applications directly into their standard supported work and asset management system.

Employees must interact with many systems in a typical day. How productive is the maintenance electrician who uses one system for work management, one for ordering parts and yet another for reporting his or her time at the end of a shift? Think of the time wasted navigating three distinct systems with different user interfaces, and the duplication of data that unavoidably occurs. How much more efficient would it be if the electrician were able to use one system that supported all of his or her work requirements? A logical grouping of systems clearly enables all workers to leverage information technology to be more efficient and effective.

Today, using modern, standards-based technologies like service-oriented architectures (SOAs), utilities can eliminate the counterproductive mix of disparate commercial and “home-grown” systems. Automated processes can be delivered as Web services, allowing asset and service management to be included in the enterprise application portfolio, joining the ranks of human resource (HR), finance and other business-critical applications.
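To make the SOA point concrete, the sketch below exposes asset status through a minimal WSGI handler, the kind of thin web-service facade that lets asset management sit alongside HR and finance applications in one integration style. The asset names, paths and payload shape are illustrative assumptions, not any vendor's actual API.

```python
import json

def asset_service(environ, start_response):
    """Minimal WSGI sketch: serve asset status as JSON at /<asset-id>.
    The in-memory status table stands in for a real asset repository."""
    status_db = {"meter-0042": "in-service", "xfmr-007": "maintenance"}
    asset_id = environ.get("PATH_INFO", "/").lstrip("/")
    body = json.dumps({"asset": asset_id,
                       "status": status_db.get(asset_id, "unknown")}).encode()
    start_response("200 OK", [("Content-Type", "application/json")])
    return [body]

# Invoke the handler directly -- no server needed for the sketch.
captured = {}
def capture(status, headers):
    captured["status"] = status

resp = asset_service({"PATH_INFO": "/meter-0042"}, capture)
print(resp[0].decode())  # {"asset": "meter-0042", "status": "in-service"}
```

Because the handler follows the standard WSGI contract, the same function could be mounted under any compliant server without change, which is precisely the portability argument behind standards-based consolidation.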

But although system consolidation in general is a good thing, there is a “tipping point” where consolidating simply for the sake of consolidation no longer provides a meaningful return and can actually erode savings and productivity gains. A system consolidation strategy should center on core competencies. For example, accountants and doctors are both skilled service professionals. But their similarity at that high level doesn’t mean you would trade one for the other just to “consolidate” the bills you receive and the checks you have to write. You don’t want accountants reading your X-rays. The same is true for your systems’ needs. Your organization’s accounting or human resource software does not possess the unique capabilities to help you manage your mission-critical transmission and distribution, facilities, vehicle fleet or IT assets. Hence it is unwise to consolidate these mission-critical systems.

System consolidation strategically aligned with business requirements offers huge opportunities for improving productivity and reducing IT costs. It also improves an organization’s agility and reverses the historical drift toward stovepipe or niche systems by providing appropriate systems for critical roles and stakeholders within the organization.

IT SERVICE MANAGEMENT

IT Service Management (ITSM) is critical to helping utilities deal with aging assets, infrastructure and employees primarily because ITSM enables companies to surf the accelerating trend of asset management convergence instead of falling behind more nimble competitors. Used in combination with pragmatic BPM and system consolidation strategies, ITSM can help utilities exploit the opportunities that this trend presents.

Three key factors are driving the convergence of management processes across IT assets (PCs, servers and the like) and operational assets (the systems and equipment through which utilities deliver service). The first concerns corporate governance, whereby corporate-wide standards and policies are forcing operational units to rethink their use of “siloed” technologies and are paving the way for new, more integrated investments. Second, utilities are realizing that to deal with their aging assets, workforce and systems dilemmas, they must increase their investments in advanced information and engineering technologies. Finally, the functional boundaries between the IT and operational assets themselves are blurring beyond recognition as more and more equipment utilizes on-board computational systems and is linked over the network via IP addresses.

Utilities need to understand this growing interdependency among assets: how individual assets affect service to the business, and why visibility into asset status is required to properly address questions of risk management and compliance.

Corporate Governance Fuels a Cultural Shift

The convergence of IT and operational technology is changing the relationship between the formerly separate operational and IT groups. The operational units are increasingly relying on IT to help deal with their “aging trilogy” problem, as well as to meet escalating regulatory compliance demands and customers’ reliability expectations. In the past, operating units purchased advanced technology (such as advanced metering or substation automation systems) on an as-needed basis, unfettered by corporate IT policies and standards. In the process, they created multiple silos of nonstandard, non-integrated systems. But now, as their dependence on IT grows, corporate governance policies are forcing operating units to work within IT’s framework. Utilities can’t afford the liability and maintenance costs of nonstandard, disparate systems scattered across their operational and IT efforts. This growing dependence on IT has thus created a new cultural challenge.

A study by Gartner of the interactions among IT and operational technology highlights this challenge. It found that “to improve agility and achieve the next level of efficiencies, utilities must embrace technologies that will enable enterprise application access to real-time information for dynamic optimization of business processes. On the other hand, lines of business (LOBs) will increasingly rely on IT organizations because IT is pervasively embedded in operational and energy technologies, and because standard IT platforms, application architectures and communication protocols are getting wider acceptance by OT [operational technology] vendors.”[2]

In fact, an InformationWeek article (“Changes at C-Level,” August 1, 2006) warned that this cultural shift could result in operational conflict if not dealt with. In that article, Nathan Bennett and Stephen Miles wrote, “Companies that look to the IT department to bring a competitive edge and drive revenue growth may find themselves facing an unexpected roadblock: their CIO and COO are butting heads.” As IT assumes more responsibility for running a utility’s operations, the roles of CIO and COO will increasingly converge.

What Is an IT Asset, Anyhow?

An important reason for this shift is the changing nature of the assets themselves, as mentioned previously. Consider the question “What is an IT asset?” In the past, most people would say that this referred to things like PCs, servers, networks and software. But what about a smart meter? It has firmware that needs updates; it resides on a wired or wireless network; and it has an IP address. In an intelligent utility network (IUN), this is true of substation automation equipment and other field-located equipment. The same is true for plant-based monitoring and control equipment. So today, if a smart device fails, do you send a mechanic or an IT technician?
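The blurring line between operational and IT assets described above can be captured in a simple classification rule: if a device has an IP address or firmware to patch, it needs IT-style management regardless of where it physically sits. The sketch below models that rule; the field names and example assets are illustrative assumptions, not a real asset schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Asset:
    """A managed asset record; fields are illustrative, not a vendor schema."""
    name: str
    asset_class: str                       # e.g. "meter", "server", "transformer"
    ip_address: Optional[str] = None       # networked devices have one
    firmware_version: Optional[str] = None # firmware implies patch management

def is_it_managed(asset: Asset) -> bool:
    """A networked, firmware-bearing device warrants IT-style management,
    whether it sits in a data center, a substation or the field."""
    return asset.ip_address is not None or asset.firmware_version is not None

smart_meter = Asset("meter-0042", "meter",
                    ip_address="10.1.2.42", firmware_version="3.1.7")
transformer = Asset("xfmr-007", "transformer")

print(is_it_managed(smart_meter))  # True  -- dispatch an IT technician too
print(is_it_managed(transformer))  # False -- purely mechanical asset
```

In practice the classification would drive which support process (IT service desk versus field maintenance) owns the failure, which is exactly the dispatch question the paragraph poses.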

This question underscores why IT asset and service management will play an increasingly important role in a utility’s operations. Utilities will certainly be using more complex technology to operate and maintain assets in the future. Electronic monitoring of asset health and performance based on conditions such as meter or sensor readings and state changes can dramatically improve asset reliability. Remote monitoring agents – from third-party condition monitoring vendors or original equipment manufacturers (OEMs) of highly specialized assets – can help analyze the increasingly complex assets being installed today as well as optimize preventive maintenance and resource planning.
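The condition-based monitoring described above reduces, in its simplest form, to rules evaluated over sensor readings. The sketch below flags an asset for preventive maintenance when several consecutive readings exceed a threshold; the rule, threshold and data are illustrative assumptions, and real monitoring agents from OEMs or third parties use far richer models.

```python
def needs_maintenance(readings, threshold, consecutive=3):
    """Flag an asset when `consecutive` successive sensor readings exceed
    `threshold` -- a deliberately simple condition-monitoring rule."""
    run = 0
    for value in readings:
        run = run + 1 if value > threshold else 0
        if run >= consecutive:
            return True
    return False

# Hypothetical bearing-temperature readings in degrees C.
print(needs_maintenance([71, 74, 82, 84, 86], threshold=80))  # True
print(needs_maintenance([71, 84, 74, 86, 79], threshold=80))  # False
```

Requiring consecutive exceedances rather than a single spike is one way such rules avoid generating work orders from transient sensor noise.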

Moreover, utilities will increasingly rely on advanced technology to help them overcome the challenges of their aging assets, workers and systems. For example, as noted above, advanced information technology will be needed to capture the tacit knowledge of experienced workers as well as replace some manual functions with automated systems. Inevitably, operational units will become technology-driven organizations, heavily dependent on the automated systems and processes associated with IT asset and service management.

The good news for utilities is that a playbook of sorts is available that can help them chart the ITSM waters in the future. The de facto global standard for best practices process guidance in ITSM is the IT Infrastructure Library (ITIL), which IT organizations can adopt to support their utility’s business goals. ITIL-based processes can help utilities better manage IT changes, assets, staff and service levels. ITIL extends beyond simple management of asset and service desk activities, creating a more proactive organization that can reduce asset failures, improve customer satisfaction and cut costs. Key components of ITIL best practices include configuration, problem, incident, change and service-level management activities.
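One of the ITIL relationships mentioned above, between incident and problem management, can be sketched as a simple rule: repeated incidents against the same configuration item suggest an underlying problem record. The threshold, record shapes and names below are illustrative assumptions, not ITIL-prescribed values.

```python
from collections import Counter

def detect_problems(incidents, recurrence=3):
    """ITIL-style heuristic: when a configuration item (CI) accumulates
    `recurrence` or more incidents, raise it as a candidate problem record."""
    counts = Counter(ci for ci, _description in incidents)
    return sorted(ci for ci, n in counts.items() if n >= recurrence)

# Hypothetical incident log: (configuration item, short description).
incidents = [
    ("substation-gw-3", "link down"),
    ("meter-head-end", "timeout"),
    ("substation-gw-3", "link down"),
    ("substation-gw-3", "cpu high"),
]
print(detect_problems(incidents))  # ['substation-gw-3']
```

Automating this incident-to-problem escalation is one concrete way ITIL practice moves an organization from reactive ticket handling toward the proactive posture the text describes.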

Implemented together, ITSM best practices as embodied in ITIL can help utilities:

  • Better align asset health and performance with the needs of the business;
  • Improve risk and compliance management;
  • Improve operational excellence;
  • Reduce the cost of infrastructure support services;
  • Capture tacit knowledge from an aging workforce;
  • Utilize business process management concepts; and
  • More effectively leverage their intelligent assets.

CONCLUSION

The “perfect storm” brought about by aging assets, an aging workforce and legacy IT systems is challenging utilities in ways many have never experienced. The current, fragmented approach to managing assets and services has been a “good enough” solution for most utilities until now. But good enough isn’t good enough anymore, since this fragmentation often has led to siloed systems and organizational “blind spots” that compromise business operations and could lead to regulatory compliance risks.

The convergence of IT and operational technology (with its attendant convergence of asset management processes) represents a challenging cultural change; however, it’s a change that can ultimately confer benefits for utilities. These benefits include not only improvements to the bottom line but also improvements in the agility of the operation and its ability to control risks and meet compliance requirements associated with asset and service management activity.

To help weather the coming perfect storm, utilities can implement best practices in three key areas:

  • BPM technology can help utilities capture and propagate asset management best practices to mitigate the looming “brain drain” and improve operational processes.
  • Judicious system consolidation can improve operational efficiency and eliminate legacy systems that are burdening the business.
  • ITSM best practices as exemplified by ITIL can streamline the convergence of IT and operational assets while supporting a positive cultural shift to help operational business units integrate with IT activities and standards.

Best-practices management of all critical assets based on these guidelines will help utilities facilitate the visibility, control and standardization required to continuously improve today’s power generation and delivery environment.

ENDNOTES

  1. Gartner’s 2006 CIO Agenda survey.
  2. Bradley Williams, Zarko Sumic, James Spiers, Kristian Steenstrup, “IT and OT Interaction: Why Conflict Resolution Is Important,” Gartner Industry Research, Sept. 15, 2006.