Future of Learning

The nuclear power industry is facing significant employee turnover, which may be exacerbated by the need to staff new nuclear units. To maintain a highly skilled workforce to safely operate U.S. nuclear plants, the industry must find ways to expedite training and qualification, enhance knowledge transfer to the next generation of workers, and develop leadership talent to achieve excellent organizational effectiveness.

Faced with these challenges, the Institute of Nuclear Power Operations (INPO), the organization charged with promoting safety and reliability across the 65 nuclear electric generation plants operating in the U.S., created a “Future of Learning” initiative. It identified ways the industry can maintain the same high standard of excellence and record of nuclear safety while accelerating training development, building individual competencies and streamlining plant training operations.

The nuclear power industry is facing a perfect storm. First, like much of the industrialized world, it must address the issues associated with an aging workforce: many of its skilled workers and nuclear engineering professionals are reaching retirement age, leaving the industry and beginning other pursuits.

Second, as baby boomers transition out of the workforce, they will be replaced by an influx of Generation Y workers. Many workers in this “millennials” generation are not aware of the heritage driving the single-minded focus on safety, and they are asking for new learning models that utilize the technologies that are so much a part of their lives.

Third, even as this big crew change takes place, demand for electricity is increasing. Many are turning to cleaner technologies – solar, wind and nuclear – to close the gap, and there is a resurgence in requests to build new nuclear plants or to add new reactors at existing plants. This nuclear renaissance also requires training and preparation so that a new workforce can take on the task of safely and reliably operating nuclear power plants.

It is estimated there will be an influx of 25,000 new workers in the industry over the next five years, with an additional 7,000 new workers needed if just a third of the new plants are built. Given that incoming workers are more comfortable using technology for learning, and that delivery models that include a blend of classroom-based, instructor-led, and Web-based methods can be more effective and efficient, the industry is exploring new models and a new mix of training.

INPO was created by the nuclear industry in 1979 following the Three Mile Island accident. It has 350 full-time and loaned employees. As a nonprofit organization, it is chartered to promote the highest levels of safety and reliability – in essence, to promote excellence – in the operation of nuclear electric generating plants. All U.S. nuclear operating companies are members.

INPO’s responsibilities include evaluating member nuclear site operations, accrediting each site’s nuclear training programs and providing assistance and information exchange. It has established the National Academy for Nuclear Training, and an independent National Nuclear Accrediting Board. INPO sends teams to sites to evaluate their respective training activities, and each station is reviewed at least every four years by the accrediting board.

INPO has developed guidelines for 12 specifically accredited programs (six operations and six maintenance/technical), including accreditation objectives and criteria. It also offers courses and seminars on leadership, in which more than 1,500 individuals – from supervisors to board members – participate annually. Lastly, it operates NANTeL (National Academy for Nuclear Training e-Learning system), which offers 200 courses for general employee training for nuclear plant access. More than 80,000 nuclear workers and subcontractors have completed training over the Web.

The Future of Learning

In 2008, to systematically address workforce and training challenges, the INPO Future of Learning team partnered with IBM Workforce and Learning Solutions to conduct more than 65 one-on-one interviews with chief executive officers, chief nuclear officers, senior vice presidents, plant managers, plant training managers and other leaders in the extended industry community. The team also completed 46 interviews with plant staff during a series of visits to three nuclear power plants. Lastly, the team developed a survey that was sent to training managers at the 65 nuclear plants, achieving a 62 percent response rate.

These are statements the team heard:

  • “Need to standardize a lot of the training, deliver it remotely, preferably to a desktop, minimize the ‘You train in our classroom in our timeframe’ and have it delivered more autonomously so it’s likely more compatible with their lifestyles.”
  • “We’re extremely inefficient today in how we design/develop and administer training. We don’t want to carry inefficiencies that we have today into the future.”
  • “Right now, in all training programs, it’s a one-size-fits-all model that’s not customized to an individual’s background. Distance learning would enable this by allowing people to demonstrate knowledge and let some people move at a faster pace.”
  • “We need to have ‘real’ e-learning. We’ve been exposed to less than adequate, older models of e-learning. We need to move away from ‘page turners’ and onto quality content.”

Several recommendations were generated as a result of the study. The first focused on ways to improve INPO’s current training offerings by adding leadership development courses, ratcheting up the interactivity of the Web-based and e-learning offerings in NANTeL and developing a “nuclear citizenship” course for new workers in the industry.

Second, there were recommendations about better utilizing training resources across the industry by centralizing common training, beginning with instructor training and certification and generic fundamentals courses. It was estimated that 50 percent of the accredited training materials are common across the industry. To accomplish this objective, INPO is exploring an industry infrastructure that would enable centralized training material development, maintenance and delivery.

The last set of recommendations focused on methods for better coordination and efficiency of training, including developing processes for certifying vendor training programs and providing a jump-start to common community college and university curricula.

In 2009, INPO is piloting a series of Future of Learning initiatives that will help determine the feasibility, cost-effectiveness, readiness and acceptance of this first set of recommendations. It is also starting to look more broadly at ways it can use learning technology to drive economies of scale, enable accelerated and prescriptive learning, and deliver value to the nuclear electric generation industry.

Where Do We Go From Here?

Beyond the initial perfect storm is another set of factors driving the future of learning.

First, consider the need for speed. It has been said that “If you are not learning at the speed of change, you are falling behind.”

In “25 Lessons from Jack Welch,” the former General Electric CEO is quoted as saying, “The desire, and the ability, of an organization to continuously learn from any source, anywhere – and to rapidly convert this learning into action – is its ultimate competitive advantage.” Giving individuals, teams and organizations the tools and technologies to accelerate and broaden their learning is an important part of the future of learning.

Second, consider the information explosion – the sheer volume of information available, the convenience of access (due, in large part, to continuing developments in technology) and the diversity of what is available. When there is too much information to digest, a person cannot locate and make use of the information he or she needs, and overload occurs. The future of learning should enable the learner to sort through information and find knowledge.

Third, consider new developments in technology. Generations X and Y are considered “digital natives.” They expect that the most current technologies are available to them – including social networking, blogging, wikis, immersive learning and gaming – and to not have them is unthinkable.

Impact of New Technology

The philosophy of training has morphed from “just-in-case” (teach them everything and hope they will remember it when they need it), to “just-in-time” (provide access to training just before the point of need), to “just-for-me.” With the latter, learning is presented in a preferred medium, with a learning path customized to reflect the student’s preferred learning style and personalized to address the current and desired level of expertise within any given time constraint.

Imagine a scenario in which a maintenance technician at a nuclear plant has to replace a specialized valve – something she either hasn’t done for a while, or has never done before. In a Web 2.0 world, she should be able to run a query on her iPhone or similar handheld device and pull up the documentation for that particular valve, access its maintenance records, view a video of the approved replacement procedure, or reach an expert who could coach her through the process.

Learning Devices

What needs to be in place to enable this vision of the future of learning? First, workers will need a device that can access the information by connecting over a secure wireless network inside the plant. Second, the learning has to be available in small chunks – learning nuggets or learning assets. Third, the learning needs to be assembled along the dimensions of learning style, desired and target level of expertise, time available and media type, among other factors. Finally, experts need to be identified, tagged to particular tasks and activities, and made accessible.

Fortunately, some of the same learning technology tools that will enable centralized maintenance and accelerated development will also facilitate personalized learning. When training is organized at a more granular level – the learning asset level – not only can it be leveraged over a variety of courses and courseware, it can also be re-assembled and ported to a variety of outputs such as lesson books, e-learning and m-learning (mobile-learning).

The example above pointed out another shift in our thinking about learning. Traditionally, our paradigm has been that learning occurs in a classroom, and when it occurs, it has taken the form of a course. In the example above, the learning takes place anywhere and anytime, moving from the formal classroom environment to an informal environment. Of course, just because learning is “informal” does not mean it is accidental, or that it occurs without preparation.

Some estimates claim 10 percent of our learning is achieved through formal channels, 20 percent from coaching, and 70 percent through informal means. Peter Henschel, former director of the Institute for Research on Learning, raised an important question: If nearly three-quarters of learning in corporations is informal, can we afford to leave it to chance?

There are still several open issues regarding informal learning:

  • How do we evaluate the impact/effectiveness of informal learning? (Informal learning, but formal demonstration of competency/proficiency);
  • How do we record one’s participation and skill-level progression in informal learning? (Informal learning, but formal recording of learning completion);
  • Who will create and maintain informal learning assets? (Informal learning, but formal maintenance and quality assurance of the learning content); and
  • When does informal learning need a formal owner (in a full- or part-time role)? (Informal learning, but will need formal policies to help drive and manage).
In the nuclear industry, accurate and up-to-date documentation is a necessity. As the industry moves toward more effective use of informal learning channels, it will need to address these issues.

Immersive Learning (Or Virtual Worlds)

The final frontier for the future of learning is expansion into virtual worlds, also known as immersive learning. Although Second Life (SL) is the best-known virtual world, there are also emerging competitors, including Active Worlds, Forterra (OLIVE), Qwaq and Unisfair.

Created in 2003 by Linden Lab of San Francisco, SL is a three-dimensional virtual world that allows users to buy “property,” create objects and buildings, and interact with other users. Unlike a game with rules and goals, SL offers an open-ended platform where users can shape their own environment. In this world, avatars do many of the same things real people do: work, shop, go to school, socialize with friends and attend rock concerts.

From a pragmatic perspective, working in an immersive learning environment such as a virtual world provides several benefits that make it an effective alternative to real life:

  • Movement in 3-D space. A virtual world could be useful in any learning situation involving movement, danger, tactics, or quick physical decisions, such as emergency response.
  • Engendering Empathy. Participants experience scenarios from another person’s perspective. For example, the Future of Learning team is exploring ways to re-create the control room experience during the Three Mile Island incident, to provide a cathartic experience for the next-generation workforce so they can better appreciate the importance of safety and human performance factors.
  • Rapid Prototyping and Co-Design. A virtual world is an inexpensive environment for quickly mocking up prototypes of tools or equipment.
  • Role Playing. By conducting role plays in realistic settings, instructors and learners can take on various avatars and play those characters.
  • Alternate Means of Online Interaction. Although users would likely not choose a virtual world as their primary online communication tool, it provides an alternative means of indicating presence and allowing interaction. Users can have conversations, share note cards, and give presentations. In some cases, SL might be ideal as a remote classroom or meeting place to engage across geographies and utility boundaries.

Robert Amme, a physicist at the University of Denver, has another laboratory in SL. Funded by a grant from the Nuclear Regulatory Commission, his team is building a virtual nuclear reactor to help train the next generation of environmental engineers on how to deal with nuclear waste (see Figure 1). The INPO Future of Learning team is exploring ways to leverage this type of learning asset as part of the nuclear citizenship initiative.

There is no doubt that nuclear power generation is once again on an upswing, but critical to its revival and longevity will be the manner in which we prepare the current and next generation of workers to become outstanding stewards of a safe, effective, clean-energy future.

Modeling Distribution Demand Reduction

In the past, distribution demand reduction was a technique used only in emergency situations a few times a year – if that. It was an all-or-nothing capability that a utility turned on and hoped for the best until the emergency was over. Few utilities could measure the effectiveness, let alone the potential, of any solutions that were devised.

Now, demand reduction is evolving to better support the distribution network during typical peaking events, rather than just emergencies. However, in this mode, it is important not only to understand the solution’s effectiveness, but to be able to treat it like any other dispatchable load-shaping resource. Advanced modeling techniques and capabilities are allowing utilities to do just that. This paper outlines various methods and tools that allow utilities to model distribution demand reduction capabilities within set time periods, or even in near real time.

Electricity demand continues to outpace the ability to build new generation and the infrastructure needed to meet the ever-growing demand-side increases driven by population growth and smart residences across the globe. In most parts of the world, electrical energy is one of the foundations of modern civilization. It helps produce our food, keeps us comfortable, and provides lighting, security, information and entertainment. In short, it is a part of almost every facet of life, and without electrical energy, the modern interconnected world as we know it would cease to exist.

Every country has one or more initiatives underway, or in planning, to deal with some aspect of generation and storage, delivery or consumption issues. Additionally, greenhouse gases (GHG) and carbon emissions need to be tightly controlled and monitored. This must be carefully balanced with expectations from financial markets that utilities deliver balanced and secure investment portfolios by demonstrating fiduciary responsibility to sustain revenue projections and measured growth.

The architects of today’s electric grid probably never envisioned the day when electric utility organizations would purposefully take measures to reduce the load on the network, deal with highly variable localized generation and reverse power flows, or anticipate a regulatory climate that impacts the decisions for these measures. They designed the electric transmission and distribution systems to be robust, flexible and resilient.

When first conceived, the electric grid was far from stable and resilient. It took growth, prudence and planning to continue the expansion of the electric distribution system. This grid was made up of a limited number of real power and reactive power devices that responded to occasional changes in power flow and demand. However, it was also designed in a world with far fewer people, with a virtually unlimited source of power, and without much concern or knowledge of the environmental effects that energy production and consumption entail.

To effectively mitigate these complex issues, a new type of electric utility business model must be considered. It must rapidly adapt to ever-changing demands in terms of generation, consumption, environmental and societal benefits. A grid made up of many intelligent and active devices that can manage consumption from both the consumer and utility side of the meter must be developed. This new business model will utilize demand management as a key element of the operation of the utility, while at the same time influencing consumer spending behavior.

To that end, a holistic model is needed – one that comprehends all aspects of the energy value chain across generation, delivery and consumption, and can optimize the solution in real time. While such a unifying model may still be a number of years away, a great deal can be gained today from modeling and visualizing the distribution network to gauge the effect that demand reduction can – and does – have in near real time. The following approaches are worth considering.

Advanced Feeder Modeling

First, a utility needs to understand in more detail how its distribution network behaves. When distribution networks were conceived, they were designed primarily with sources (the head of the feeder and substation) and sinks (the consumers or load) spread out along the distribution network. Power flows were assumed to be one direction only, and the feeders were modeled for the largest peak level.

Voltage and volt-ampere reactive (VAR) management was generally applied for loss optimization rather than load reduction. No thought was given to limiting power to segments of the network, or to distributed storage or generation, all of which can dramatically affect flows on the network, even causing reverse flows at times. Sensors to measure voltage and current were applied at the head of the feeder and at a few critical points (mostly in historical problem areas).

Planning feeders at most utilities is an exercise performed when large changes are anticipated (e.g., a new subdivision or a major customer) or on a periodic basis, usually every three to five years. Loads were traditionally well understood, with predictable variability, so this type of approach worked reasonably well. The utility also controlled all generation sources on the network (e.g., peakers), and when there was a need for demand reduction, it was initiated by the utility, usually only during critical periods.

Today’s feeders are much more complex, and are being significantly influenced by both generation and demand from entities outside the control of the utility. Even within the utility, various seemingly disparate groups will, at times, attempt to alter power flows along the network. The simple model of worst-case peaking on a feeder is not sufficient to understand the modern distribution network.

The following factors must be considered in the planning model:

  • Various demand-reduction techniques, when and where they are applied and the potential load they may affect;
  • Use of voltage reduction as a load-shedding technique, and where it will most likely yield significant results (i.e., resistive load);
  • Location, size and capacity of storage;
  • Location, size and type of renewable generation systems;
  • Use and location of plug-in electrical vehicles;
  • Standby generation that can be fed into the network;
  • Various social ecosystems and their characteristics to influence load; and
  • Location and types of sensors available.

Generally, feeders are modeled as a single unit with their power characteristic derived from the maximum peaking load and connected kilovolt-amperage (kVA) of downstream transformers. A more advanced model treats the feeder as a series of connected segments. The segment definitions can be arbitrary, but they are generally chosen where the utility will want to understand, and potentially control, these segments differently from others. This may be influenced by voltage regulation, load curtailment, stability issues, distributed generation sources, storage, or other unique characteristics that differ from one segment to the next.
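
To make the segment-level idea concrete, the sketch below shows one possible way to represent a feeder as connected segments in Python. It is illustrative only; the attribute names and example values are assumptions, not part of any utility's actual data model.

    # Minimal sketch of a segment-level feeder model (illustrative only).
    # Attribute names and values are assumptions, not a utility schema.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class FeederSegment:
        segment_id: str
        connected_kva: float            # downstream transformer nameplate kVA
        peak_load_kw: float             # historical or estimated segment peak
        has_voltage_regulation: bool = False
        distributed_gen_kw: float = 0.0  # PV, standby generation, etc.
        storage_kwh: float = 0.0
        sensors: List[str] = field(default_factory=list)

    @dataclass
    class Feeder:
        feeder_id: str
        segments: List[FeederSegment]

        def peak_load_kw(self) -> float:
            """Aggregate peak as a simple sum of segment peaks (ignores diversity)."""
            return sum(s.peak_load_kw for s in self.segments)

    # Example: one feeder split into two segments that will be controlled differently.
    feeder = Feeder("FDR-01", [
        FeederSegment("FDR-01-A", connected_kva=1500, peak_load_kw=900,
                      has_voltage_regulation=True, sensors=["head-of-feeder"]),
        FeederSegment("FDR-01-B", connected_kva=800, peak_load_kw=450,
                      distributed_gen_kw=120),
    ])
    print(feeder.peak_load_kw())  # 1350.0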

The following describes an advanced means of modeling electrical distribution feeder networks. It provides for segmentation and sensor placement in the absence of a complete network and historical usage model. The approach combines traditional electrical engineering and power-flow modeling, using tools such as CYME, with non-traditional approaches based on geospatial and statistical analysis.

The model builds upon information such as usage data, network diagrams, device characteristics and existing sensors. It then adds elements that could present a discrepancy with the known model – such as social behavior, demand-side programs and future grid operations – based on both spatio-temporal and statistical modeling. Finally, suggestions can be made about sensor placement and characteristics so that the system can be monitored once it is in place.

Generally, a utility would take a more simplistic view of the problem. It would start by directly applying statistical analysis and stochastic modeling across the grid to develop a generic methodology for selecting the number of sensors and where to place them, based on sensor accuracy, cost and the risk of error introduced by basic modeling assumptions (load allocation, timing of peak demand and other influences on error). However, doing so would limit the utility to the data it already has, in an environment that will be changing dramatically.

The recommended approach first performs some analysis to determine what the potential error sources are, which of them are material to the sensor question, and which could influence the system’s power flows. Next, an attempt can be made to geographically characterize where on the grid these influences are most significant. Then, a statistical approach can be applied to develop a model for setting the number, type and location of additional sensors. Lastly, sensor density and placement can be addressed.
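
As a rough illustration of the statistical step only, the sketch below ranks feeder segments by the uncertainty of their estimated loading and spends a fixed sensor budget on the worst ones. It is an assumption-laden simplification, not the methodology described above; the load-factor distribution and segment data are invented for the example.

    # Illustrative only: rank segments by estimated load uncertainty and
    # place a limited sensor budget where the uncertainty is highest.
    import random
    import statistics

    random.seed(0)  # reproducible example

    def simulate_segment_load(connected_kva, lf_mean=0.45, lf_sd=0.12, trials=2000):
        """Monte Carlo estimate of a segment's peak load when only transformer
        nameplate kVA is known; the load-factor distribution is an assumption."""
        samples = [connected_kva * max(0.0, random.gauss(lf_mean, lf_sd))
                   for _ in range(trials)]
        return statistics.mean(samples), statistics.stdev(samples)

    segments = {"FDR-01-A": 1500, "FDR-01-B": 800, "FDR-02-A": 2200, "FDR-02-B": 600}
    sensor_budget = 2

    uncertainty = {}
    for seg_id, kva in segments.items():
        _, sd_kw = simulate_segment_load(kva)
        uncertainty[seg_id] = sd_kw  # wider spread = less confidence without a sensor

    chosen = sorted(uncertainty, key=uncertainty.get, reverse=True)[:sensor_budget]
    print("Place sensors on:", chosen)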

Feeder Modeling Technique

Feeder conditioning is important for minimizing losses, especially when the utility wants to moderate voltage levels as a load modification method. Without proper feeder conditioning and sufficient sensors to monitor the network, the utility is at risk of either violating regulatory voltage limits or limiting its ability to shed the optimal amount of load from the system during voltage reduction operations.
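
To make the stakes concrete, a common rule of thumb (not stated in this paper) is the conservation voltage reduction (CVR) factor: the percent load reduction obtained per percent voltage reduction. The back-of-the-envelope sketch below uses assumed values only.

    # Illustrative CVR arithmetic; the CVR factor of 0.8 is an assumed value,
    # typical only of feeders with a high share of resistive load.
    feeder_load_mw = 10.0
    voltage_reduction_pct = 3.0      # must stay inside regulatory voltage limits
    cvr_factor = 0.8                 # % load drop per % voltage drop (assumption)

    load_reduction_mw = feeder_load_mw * (voltage_reduction_pct * cvr_factor) / 100
    print(f"Estimated reduction: {load_reduction_mw:.2f} MW")   # ~0.24 MW

Without enough sensors along the feeder, the utility cannot verify that the assumed voltage reduction actually holds at the end of the line, which is exactly the risk described above.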

Traditionally, feeder modeling is a planning activity that is done at periodic (for example, yearly) intervals or during an expected change in usage. Tools such as CYME’s CYMDIST provide feeder analysis using:

  • Balanced and unbalanced voltage drop analysis (radial, looped or meshed);
  • Optimal capacitor placement and sizing to minimize losses and/or improve voltage profile;
  • Load balancing to minimize losses;
  • Load allocation/estimation using customer consumption data (kWh), distribution transformer size (connected kVA), real consumption (kVA or kW) or the REA method (a simplified allocation sketch follows this list). The algorithm treats multiple metering units as fixed demands and large metered customers as fixed loads;
  • Flexible load models for uniformly distributed loads and spot loads featuring independent load mix for each section of circuit;
  • Load growth studies for multiple years; and
  • Distributed generation.
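
As a simplified illustration of the load allocation idea referenced in the list above, a measured feeder peak can be spread across distribution transformers in proportion to connected kVA. This is a sketch under stated assumptions, far simpler than a commercial tool’s algorithm.

    # Sketch of proportional load allocation: spread a measured feeder peak
    # across distribution transformers in proportion to connected kVA.
    # Values are invented for illustration.
    feeder_peak_kw = 1350.0
    transformers = {"TX-1": 500, "TX-2": 300, "TX-3": 700}   # connected kVA

    total_kva = sum(transformers.values())
    allocated = {tx: feeder_peak_kw * kva / total_kva for tx, kva in transformers.items()}

    for tx, kw in allocated.items():
        print(f"{tx}: {kw:.0f} kW")   # TX-1: 450, TX-2: 270, TX-3: 630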

However, in many cases, much of the information required to run an accurate model is not available. This may be because the data does not exist, the feeder usage paradigm is changing, the sampling period does not represent true usage of the network, the network usage is about to undergo significant change, or other non-electrical factors intervene.

This represents a bit of a chicken-or-egg problem. A utility needs to condition its feeders to change the operational paradigm, but it also needs operational information to make decisions on where and how to change the network. The solution is a combination of using existing known usage and network data, and combining it with other forms of modeling and approximation to build the best future network model possible.

Therefore, this exercise refines traditional modeling with three additional techniques: geospatial analysis; statistical modeling; and sensor selection and placement for accuracy.

If a distribution management system (DMS) will be deployed, or is being considered, its modeling capability may be used as an additional basis and refinement employing simulated and derived data from the above techniques. Lastly, if high accuracy is required and time allows, a limited number of feeder segments can be deployed and monitored to validate the various modeling theories prior to full deployment.

The overall goals for using this type of technique are:

  • Limit customer over- or under-voltage;
  • Maximize returned megawatts in the system in load reduction modes;
  • Optimize the effectiveness of the DMS and its models;
  • Minimize cost of additional sensors to only areas that will return the most value;
  • Develop automated operational scenarios, test and validation prior to system-wide implementation; and
  • Provide a foundation for additional network automation capabilities.

The first step is to set aside a short period of time to thoroughly vet possible influences on the number, spacing and value offered by additional sensors on the distribution grid. This involves understanding and obtaining the information that will most influence the model and, therefore, the use of sensors. That information could include historical load data, distribution network characteristics, transformer nameplate loading, customer survey data, weather data and other related information.

The second step is the application of geospatial analysis to identify areas of the grid most likely to have influences driving a need for additional sensors. It is important to recognize that within this step is a need to correlate those influential geospatial parameters with load profiles of various residential and commercial customer types. This step represents an improvement over simply applying the same statistical analysis generically over the entirety of the grid, allowing for two or more “grades” of feeder segment characteristics for which different sensor standards would be developed.

The third step is the statistical analysis and stochastic modeling to develop recommended standards and methodology for determining sensor placement based on the characteristic segments developed from the geospatial assessment. Items set aside as not material for sensor placement serve as a necessary input to the coming “predictive model” exercise.

Lastly, a traditional electrical and accuracy-based analysis is used to model the exact number and placement of additional sensors to support the derived models and the planned usage of the system for all scenarios depicted in the model – not just summertime peaking.

Conclusion

The modern distribution network built for the smart grid will need to undergo significantly more detailed planning and modeling than a traditional network. No one tool is suited to the task, and it will take multiple disciplines and techniques to derive the most benefit from the modeling exercise. However, if a utility embraces the techniques described within this paper, it will not only have a better understanding of how its networks perform in various smart grid scenarios, but it will also be better positioned to optimize those networks for load management and loss reduction.

Silver Spring Networks

When engineers built the national electric grid, their achievement made every other innovation built on or run by electricity possible – from the car and airplane to the radio, television, computer and the Internet. Over decades, all of these inventions have gotten better, smarter and cheaper while the grid has remained exactly the same. As a result, our electrical grid is operating under tremendous stress. The Department of Energy estimates that by 2030, demand for power will outpace supply by 30 percent. And this increasing demand for low-cost, reliable power must be met alongside growing environmental concerns.

Silver Spring Networks (SSN) provides the first proven technology to enable the smart grid. SSN is a complete smart grid solutions company that enables utilities to achieve operational efficiencies, reduce carbon emissions and offer their customers new ways to monitor and manage their energy consumption. SSN provides hardware, software and services that allow utilities to deploy and run unlimited advanced applications, including smart metering, demand response, distribution automation and distributed generation, over a single, unified network.

The smart grid should operate like the Internet for energy, without proprietary networks built around a single application or device. In the same way that one can plug any laptop or device into the Internet, regardless of its manufacturer, utilities should be able to “plug in” any application or consumer device to the smart grid. SSN’s Smart Energy Network is based on open, Internet Protocol (IP) standards, allowing for continuous, two-way communication between the utility and every device on the grid – now and in the future.

The IP networking standard adopted by Federal agencies has proven secure and reliable over decades of use in the information technology and finance industries. This network provides a high-bandwidth, low-latency and cost-effective solution for utility companies.

SSN’s network interface cards (NICs) are installed in “smart” devices, like smart meters at the consumer’s home, allowing them to communicate with SSN’s access points. Each access point communicates with networked devices over a radius of one or two miles, creating a wireless communication mesh that connects every device on the grid to one another and to the utility’s back office.

Using the Smart Energy Network, utilities will be able to remotely connect or disconnect service, send pricing information to customers who can understand how much their energy is costing in real time, and manage the integration of intermittent renewable energy sources like solar panels, plug-in electric vehicles and wind farms.

In addition to providing the Smart Energy Network and the software/firmware that makes it run smoothly, SSN develops applications like outage detection and restoration, and provides support services to its utility customers. By minimizing or eliminating interruptions, the self-healing grid could save industrial and residential consumers over $100 billion per year.

Founded in 2002 and headquartered in Redwood City, Calif., SSN is a privately held company backed by Foundation Capital, Kleiner Perkins Caufield & Byers and Northgate Capital. The company has over 200 employees and a global reach, with partnerships in Australia, the U.K. and Brazil.

SSN is the leading smart grid solutions provider, with successful deployments with utilities serving 20 percent of the U.S. population, including Florida Power & Light (FPL), Pacific Gas & Electric (PG&E), Oklahoma Gas & Electric (OG&E) and Pepco Holdings, Inc. (PHI), among others.

FPL is one of the largest electric utilities in the U.S., serving approximately 4.5 million customers across Florida. In 2007, SSN and FPL partnered to deploy SSN’s Smart Energy Network to 100,000 FPL customers. The project began with rigorous environmental and reliability testing to ensure that SSN’s technology would hold up under the harsh environmental conditions in some areas of Florida. Few companies are able to sustain the scale and quality of testing that FPL required during this deployment, including power outage notification testing, exposure to water and salt spray, and network throughput performance testing for self-healing failover characteristics.

SSN’s solution has met or exceeded all FPL acceptance criteria. FPL plans to continue deployment of SSN’s Smart Energy Network at a rate of one million networked meters per year beginning in 2010 to all 4.5 million residential customers.

PG&E is currently rolling out SSN’s Smart Energy Network to all 5 million electric customers over a 70,000 square-mile service area.

OG&E, a utility serving 770,000 customers in Oklahoma and western Arkansas, worked with SSN to deploy a small-scale pilot project to test the Smart Energy Network and gauge customer satisfaction. The utility deployed SSN’s network, along with a Web-based energy management portal, in 25 homes in northwest Oklahoma City. Another 6,600 apartments were given networked meters to allow remote initiation and termination of service.

Consumer response to the project was overwhelmingly positive. Participating residents said they gained flexibility and control over their household’s energy consumption by monitoring their usage on in-home touch screen information panels. According to one customer, “It’s the three A’s: awareness, attitude and action. It increased our awareness. It changed our attitude about when we should be using electricity. It made us take action.”

Based on the results, OG&E presented a plan for expanded deployment to the Oklahoma Corporation Commission for its consideration.

PHI recently announced its partnership with SSN to deliver The Smart Energy Network to its 1.9 million customers across Washington, D.C., Delaware, Maryland and New Jersey. The first phase of the smart grid deployment will begin in Delaware in March 2009 and involve SSN’s advanced metering and distribution automation technology. Additional deployment will depend on regulatory authorization.

The impact of energy efficiency is enormous. More aggressive energy efficiency efforts could cut the growth rate of worldwide energy consumption by more than half over the next 15 years, according to the McKinsey Global Institute. The Brattle Group states that demand response could reduce peak load in the U.S. by at least 5 percent over the next few years, saving over $3 billion per year in electricity costs. The discounted present value of these savings would be $35 billion over the next 20 years in the U.S. alone, with significantly greater savings worldwide.
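
The present-value figure follows from standard discounting of the annual savings. The sketch below reproduces the arithmetic under an assumed discount rate of about 6 percent; the rate actually used in the Brattle estimate is not stated here.

    # Discounted present value of ~$3 billion/year in savings over 20 years.
    # The 6 percent discount rate is an assumption chosen to show the arithmetic.
    annual_savings_bn = 3.0
    discount_rate = 0.06
    years = 20

    pv_bn = sum(annual_savings_bn / (1 + discount_rate) ** t for t in range(1, years + 1))
    print(f"Present value: ${pv_bn:.0f} billion")   # about $34 billion, in line with the $35 billion cited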

Governments throughout the EU, Canada and Australia are now mandating implementation of alternate energy and grid efficiency network programs. The Smart Energy Network is the technology platform that makes energy efficiency and the smart grid possible. And, it is working in the field today.

Managing Communications Change

Change is being forced upon the utilities industry. Business drivers range from stakeholder pressure for greater efficiency to the changing technologies involved in operational energy networks. New technologies such as intelligent networks or smart grids, distribution automation or smart metering are being considered.

The communications network is becoming the key enabler for the evolution of reliable energy supply. However, few utilities today have a communications network that is robust enough to handle and support the exacting demands that energy delivery is now making.

It is this process of change – including the renewal of the communications network – that is vital for each utility’s future. For the utility, however, this is a technological step change requiring different strategies and designs. It also requires new skills, all of which must be acquired and implemented in timescales that do not sit comfortably with traditional technology strategies.

The problems facing today’s utility include understanding the new technologies and assessing their capabilities and applications. In addition, the utility has to develop an appropriate strategy to migrate legacy technologies and integrate them with the new infrastructure in a seamless, efficient, safe and reliable manner.

This paper highlights the benefits utilities can realize by adopting a new approach to their customers’ needs and engaging a network partner that will take responsibility for the network upgrade, its renewal and evolution, and the service transition.

The Move to Smart Grids

The intent of smart grids is to provide better efficiency in the production, transport and delivery of energy. This is realized in two ways:

  • Better real-time control: ability to remotely monitor and measure energy flows more closely, and then manage those flows and the assets carrying them in real time.
  • Better predictive management: ability to monitor the condition of the different elements of the network, predict failure and direct maintenance. The focus is on being proactive to real needs prior to a potential incident, rather than being reactive to incidents, or performing maintenance on a repetitive basis whether it is needed or not.

These mechanisms imply more measurement points, remote monitoring and management capabilities than exist today. And this requires a greater reliance on reliable, robust, highly available communications than has ever been the case before.

The communications network must continue to support operational services independently of external events, such as power outages or public service provider failure, yet be economical and simple to maintain. Unfortunately, the majority of today’s utility communications implementations fall far short of these stringent requirements.

Changing Environment

The design template for the majority of today’s energy infrastructure was developed in the 1950s and 1960s – and the same is true of the associated communications networks.

Typically, these communications networks have evolved into a series of overlays, often of different technology types and generations (see Figure 1). For example, protection tends to use its own dedicated network. The physical realization varies widely, from tones over copper via dedicated time division multiplexing (TDM) connections to dedicated fiber connections. These generally use a mix of privately owned and leased services.

Supervisory control and data acquisition (SCADA) systems generally still use modem technology at speeds between 300 baud and 9.6 kilobaud. Again, the infrastructure is often copper or TDM, running as one of many separate overlay networks.

Lastly, operational voice services (as opposed to business voice services) are frequently analog on yet another separate network.

Historically, there were good operational reasons for these overlays. But changes in device technology (for example, the evolution toward e-SCADA based on IP protocols), together with decreasing support from communications equipment vendors for legacy communications technologies, mean that the strategy for these networks has to be reassessed. In addition, the increasing demand for further operational applications (for example, condition monitoring or CCTV, both to support substation automation) requires a more up-to-date networking approach.

Tomorrow’s Network

With the exception of protection services, communications between network devices and the network control centers are evolving toward IP-based networks (see Figure 2). The benefits of this simplified infrastructure are significant and can be measured in terms of asset utilization, reduced capital and operational costs, ease of operation, and the flexibility to adapt to new applications. Consequently, utilities will find themselves forced to seriously consider the shift to a modern, homogeneous communications infrastructure to support their critical operational services.

Organizing For Change

As noted above, there are many cogent reasons to transform utility communications to a modern, robust communications infrastructure in support of operational safety, reliability and efficiency. However, some significant considerations should be addressed to achieve this transformation:

Network Strategy. It is almost inevitable that a new infrastructure will cross traditional operational and departmental boundaries within the utility. Each operational department will have its own priorities and requirements for such a network, and traditionally, each wants some, or total, control. However, to achieve real benefits, a greater degree of centralized strategy and management is required.

Architecture and Design. The new network will require careful engineering to ensure that it meets the performance-critical requirements of energy operations. It must maintain or enhance the safety and reliability of the energy network, as well as support the traffic requirements of other departments.

Planning, Execution and Migration. Planning and implementation of the core infrastructure is just the start of the process. Each service requires its own migration plan and has its own migration priorities. Each element requires specialist technical knowledge and, preferably, practical field experience.

Operation. Gone are the days when a communications failure was rectified by sending an engineer into the field to find the fault and fix it. Maintaining network availability and robustness calls for sound operational processes and excellent diagnostics before any engineer or technician hits the road. The same level of robust, centralized management tools and processes that support the energy networks has to be put in place to support the communications network – no matter what technologies are used in the field.

Support. Although these technologies are well understood by the telecommunications industry, they are likely to be new to the energy utilities industry. This means that a solid support organization familiar with these technologies must be implemented. The evolution process requires an intense level of up-front skills and resources. Often these are not readily available in-house – certainly not in the volume required to make any network renewal or transformation effective. Building up this skill and resource base by recruitment will not necessarily yield staff that is aware of the peculiarities of the energy utilities market. As a result, there will be significant time lag from concept to execution, and considerable risk for the utility as it ventures alone into unknown territory.

Keys To Successful Engagement

Engaging a services partner does not mean ceding control through a rigid contract. Rather, it means crafting a flexible relationship that takes into consideration three factors: What is the desired outcome of the activity? What is the best balance of scope between partner assistance and in-house performance to achieve that outcome? How do you retain the flexibility to accommodate change while retaining control?

Desired outcome is probably the most critical element and must be well understood at the outset. For one utility, the desired outcome may be to rapidly enable the upgrade of the complete energy infrastructure without having to incur the upfront investment in a mass recruitment of the required new communications skills.

For other utilities, the desired outcome may be different. But if the outcomes include elements of time pressure, new skills and resources, and/or network transformation, then engaging a services partner should be seriously considered as one of the strategic options.

Second, not all activities have to be in scope. The objective of the exercise might be to supplement existing in-house capabilities with external expertise. Or, it might be to launch the activity while building up appropriate in-house resources in a measured fashion through the Build-Operate-Transfer (BOT) approach.

In looking for a suitable partner, the utility seeks to leverage not only the partner’s existing skills, but also its experience and lessons learned performing the same services for other utilities. Having a few bruises is not a bad thing – this means that the partner understands what is at stake and the range of potential pitfalls it may encounter.

Lastly, retaining flexibility and control is a function of the contract between the two parties which should be addressed in their earliest discussions. The idea is to put in place the necessary management framework and a robust change control mechanism based on a discussion between equals from both organizations. The utility will then find that it not only retains full control of the project without having to take day-to-day responsibility for its management, but also that it can respond to change drivers from a variety of sources – such as technology advances, business drivers, regulators and stakeholders.

Realizing the Benefits

Outsourcing or partnering the communications transformation will yield benefits, both tangible and intangible. It must be remembered that there is no standard “one-size-fits-all” outsourcing product. Thus, the benefits accrued will depend on the details of the engagement.

There are distinct tangible benefits that can be realized, including:

Skills and Resources. A unique benefit of outsourcing is that it eliminates the need to recruit skills not available internally. These are provided by the partner on an as-needed basis. The additional advantage for the utility is that it does not have to bear the fixed costs once they are no longer required.

Offset Risks. Because the partner is responsible for delivery, the utility is able to mitigate risk. For example, traditionally vendors are not motivated to do anything other than deliver boxes on time. But with a well-structured partnership, there is an incentive to ensure that the strategy and design are optimized to economically deliver the required services and ease of operation. Through an appropriate regime of business-related key performance indicators (KPIs), there is a strong financial incentive for the partner to operate and upgrade the network to maintain peak performance – something that does not exist when an in-house organization is used.

Economies of Scale. Outsourcing can bring economies of scale resulting from synergies with other parts of the partner’s business, such as other contracts and internal projects.

There also are many other benefits associated with outsourcing that are not as immediately obvious and commercially quantifiable as those listed above, but can be equally valuable.

Some of these less tangible benefits include:

Fresh Point of View. Within most companies, employees often have a vested interest in maintaining the status quo. But a managed services organization has a vested interest in delivering the best possible service to the customer – a paradigm shift in attitude that enables dramatic improvements in performance and creativity.

Drive to Achieve Optimum Efficiency. Executives, freed from the day-to-day business of running the network, can focus on their core activities, concentrating on service excellence rather than complex technology decisions. To quote one customer, “From my perspective, a large amount of my time that might have in the past been dedicated to networking issues is now focused on more strategic initiatives concerned with running my business more effectively.”

Processes and Technologies Optimization. Optimizing processes and technologies to improve contract performance is part of the managed services package and can yield substantial savings.

Synergies with Existing Activities Create Economies of Scale. A utility and a managed services vendor have considerable overlap in the functions performed within their communications engineering, operations and maintenance activities. For example, a multi-skilled field force can install and maintain communications equipment belonging to a variety of customers. This not only provides cost savings from synergies with the equivalent customer activity, but also an improved fault response due to the higher density of deployed staff.

Access to Global Best Practices. An outsourcing contract relieves a utility of the time-consuming and difficult responsibility of keeping up to speed with the latest thinking and developments in technology. Alcatel-Lucent, for example, invests around 14 percent of its annual revenue into research and development; its customers don’t have to.

What Can Be Outsourced?

There is no one outsourcing solution that fits all utilities. The final scope of any project will be entirely dependent on a utility’s specific vision and current circumstances.

The following list briefly describes some of the functions and activities that are good possibilities for outsourcing:

Communications Strategy Consulting. Before making technology choices, the energy utility needs to define the operational strategy of the communications network. Too often communications is viewed as “plug and play,” which is hardly ever the case. A well-thought-out communications strategy will deliver this kind of seamless operation. But without that initial strategy, the utility risks repeating past mistakes and acquiring an ad-hoc network that will rapidly become a legacy infrastructure, which will, in turn, need replacing.

Design. Outsourcing allows utilities to evolve their communications infrastructure without upfront investment in incremental resources and skills. It can delegate responsibility for defining network architecture and the associated network support systems. A utility may elect to leave all technological decisions to the vendor and merely review progress and outcomes. Or, it may retain responsibility for technology strategy, and turn to the managed services vendor to turn the strategy into architecture and manage the subsequent design and project activities.

Build. Detailed planning of the network, the rollout project and the delivery of turnkey implementations all fall within the scope of the outsourcing process.

Operate, Administer and Maintain. Includes network operations and field and support services:

  • Network Operations. A vendor such as Alcatel-Lucent has the necessary experience in operating Network Operations Centers (NOCs), both on a BOT and ongoing basis. This includes handling all associated tasks such as performance and fault monitoring, and services management.
  • Network and Customer Field Services. Today, few energy utilities consider field maintenance and provisioning activities to be a strategic part of their business, and most recognize that these are prime candidates for outsourcing. Activities that can be outsourced include corrective and preventive maintenance, network and service provisioning, and spare parts management, return and repair – in other words, all the daily, time-consuming, but vitally important elements of running a reliable network.
  • Network Support Services. Behind the first-line activities of the NOC are a set of engineering support functions that assist with more complex faults – functions that cannot be automated and that tend to duplicate those of the vendor. The integration and sharing of these functions enabled by outsourcing can significantly improve the utility’s efficiency.

Conclusion

Outsourcing can deliver significant benefits to a utility, both in its ability to invest in and improve its operations and in the associated costs. However, each utility has its own unique circumstances, specific immediate needs, and vision of where it is going. Therefore, each technical and operational solution is different.

Meeting Future Utility Operating Challenges With a Smart Grid

The classical school of utility operations prescribes four priorities, ranked in the following descending order: safety, reliability, customer service and profit. Although it’s not hard to engage any number of industry insiders in an argument over whether profit in the classical model has recently switched places with customer service (and/or whether it should), most people accept that safety and reliability still reign supreme when it comes to operating a utility. This is true whether one takes a policy-, economic-, utility- or customer-oriented perspective.

Over many decades the utility industry has established a remarkably consistent pattern of power delivery based on the above-described priorities. Large, centralized generation facilities produce electricity from various sources interconnected via a networked transmission system feeding a predominantly radial distribution system. This classical power distribution system supports a predictable demand pattern that utilities can typically manage by using analytics such as similar day load forecasting. Moreover, future demand is also predictable, since average loads have been growing consistently by just a few percentage points annually, year in and year out.
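
As a rough illustration of the “similar day” idea, a forecast for tomorrow can be built by averaging the loads of past days with comparable weather and day type. This is a hedged sketch of the general technique, not a description of any specific utility’s system; the data values are invented.

    # Minimal similar-day load forecast: average the peak load of past days
    # whose temperature and day type are closest to the forecast day.
    history = [
        {"temp": 91, "weekday": True,  "peak_mw": 820},
        {"temp": 88, "weekday": True,  "peak_mw": 790},
        {"temp": 72, "weekday": True,  "peak_mw": 610},
        {"temp": 90, "weekday": False, "peak_mw": 700},
    ]

    def similar_day_forecast(temp_forecast, weekday, k=2):
        # keep days of the same type, then pick the k closest temperatures
        candidates = [d for d in history if d["weekday"] == weekday]
        candidates.sort(key=lambda d: abs(d["temp"] - temp_forecast))
        nearest = candidates[:k]
        return sum(d["peak_mw"] for d in nearest) / len(nearest)

    print(similar_day_forecast(temp_forecast=92, weekday=True))   # (820 + 790) / 2 = 805.0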

To support this power delivery model, utilities also employ remarkably consistent system design and operational processes. Although any given utility might employ slightly different processes and procedures at varying degrees of efficiency and effectiveness – or deploy operating assets with slightly different design specifications – the underlying elements are generally consistent from one utility to another. They are engineered to either fail safe (safety) and/or not to fail at all (reliability) based on long-term operating patterns.

So why implement a smart grid? After all, the classical method of managing supply and demand has worked reasonably well over the decades. The system is safe and reliable, and most utilities are very profitable even in economic downtimes. However, a smart grid has three interrelated attributes – transparency, conditionality and kinematics – that together radically improve the “situational awareness” of the real-time state of the grid for both utilities and customers.

With this situational awareness comes the high system-state observability (transparency) that drives conditional management (conditionality) of the grid. All of this will ultimately support future power delivery patterns, which will be much more complex and difficult to predict and manage because demand and supply will fluctuate much more radically than at present (kinematics).

TRANSPARENCY

Price transparency is the foundation on which deregulated and competitive markets are built. However, until now price transparency has been limited primarily to wholesale transmission and generation domains. Indeed, the lack of price transparency at the point of distribution (that is, at retail) is a key reason deregulation has stalled in the United States.

Price transparency is of course only one aspect of the issue. Utilities must also synchronize usage transparency with price transparency based on time. That is, the value of knowing real-time pricing is diminished if a customer cannot also see their real-time usage and make energy usage behavior changes in relation to the real-time price signals.

From the utility’s perspective, usage transparency is limited because the distribution elements of most utility operations are largely opaque to operators. Once beyond the substation, disruptions are identified primarily by inference from fault conditions and from usage patterns recorded via meter readings a month after the disruption occurred. For example, a distribution circuit may be substantially overloaded, but in most cases the utility won’t know until it fails. And when a failure does occur, utilities still depend on manual processes to determine the precise location and cause of the fault. The customer loads or network conditions that precipitated the failure can only be analyzed well after the event.

A smart grid significantly improves the level of visibility into the distribution grid. Smart meters, line sensors and the embedded processing that takes place within system assets such as switches and reclosers all provide a stream of real-time and near real-time data to the utility about the current operational state of the grid. The result: a dramatic improvement in utilities’ awareness of the state of the distribution grid.

CONDITIONALITY

As is the case with transparency, the consumer’s perspective on conditionality is more mature than the utility’s. For example, the idea of the smart building is all about implementing a mini premise-side smart grid within the customer location and installing simple devices such as motion detectors that turn lights on or off in a room. Commercial energy management systems use even more sophisticated ways of optimizing the lighting, heating and other environmental parameters of a work or living space.

From the utility’s perspective, however, conditionality is much less advanced. In today’s operating world, most maintenance or repair activities take place either too late or too soon. When utilities wait until something in the infrastructure fails, it’s too late. If the grid is inspected based on some set time schedule irrespective of its condition, it’s too soon. Utilities thus fall into a pattern of either fault- or usage-based maintenance.

The alternative – condition-based maintenance – is already being used in many industries. The difference in the utilities industry is that outside of energy generation and transmission activities, there’s little data on the ongoing real-time condition of most of the assets a utility utilizes to provide its customers with service.
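
As a purely illustrative sketch of the difference, the fragment below compares a calendar-based trigger with a condition-based one for a single asset; the asset record, threshold and readings are hypothetical and do not reflect any particular vendor's system.

  from datetime import date, timedelta

  # Hypothetical asset record: last service date plus a recent condition reading.
  asset = {
      "id": "XFMR-1042",
      "last_service": date(2007, 6, 1),
      "condition_reading": 0.87,   # e.g., fraction of rated load (illustrative)
  }

  SERVICE_INTERVAL = timedelta(days=365)   # calendar-based rule
  CONDITION_LIMIT = 0.80                   # condition-based rule

  def calendar_due(a, today):
      # Service happens on schedule, regardless of the asset's actual condition.
      return today - a["last_service"] >= SERVICE_INTERVAL

  def condition_due(a):
      # Service is triggered only when the measured condition crosses a limit.
      return a["condition_reading"] >= CONDITION_LIMIT

  today = date(2008, 3, 1)
  print("Calendar-based rule says service is due:", calendar_due(asset, today))
  print("Condition-based rule says service is due:", condition_due(asset))

Under these invented numbers the schedule says the transformer can wait, while its measured condition says it cannot – the gap that condition-based maintenance is meant to close.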

The chief benefit of conditionality is that it allows utilities to optimize asset utilization in both over- and under-use situations (Figure 1).

Conditionality also opens up opportunities for utilities to fully automate their distribution operations. Not only will this enable them to provide more reliable service to customers, it will also reduce the need for human intervention and thus dramatically cut labor costs. In addition, automation can be used to mitigate the utilities industry’s looming problem of an aging workforce. For these and other reasons, conditionality is one of the most important contributions the smart grid will make to the industry.

KINEMATICS

In classical physics, kinematics studies how the position of an object changes with time. In today’s utility operations, neither load nor supply is particularly kinematic: under normal operating conditions, changes to either occur slowly, and both can be reliably predicted.

Many industry observers, however, believe that this scenario is about to change dramatically. One thing that’s expected to drive this change is “distributed generation.” Under this scenario, instead of relying on large centralized generation, the industry will see significant growth in distribution-side generation technologies. Unlike today, much of this supply will not be centrally dispatched or under direct central control. The resulting energy supply will be much more complex to predict and manage. To the futurist this may seem like an exciting prospect, but to a grid operator or a utility, this represents a control and management nightmare, because it directly challenges the operational priorities of safety and reliability.

Hybrid and electric automobiles will also substantially alter the pattern of supply and load on the current grid. According to some predictions, electric automobiles will account for upwards of 20 percent of the automobile fleet in the United States in the coming decades. This means that millions of automobiles charging each night could increase customer load profiles over time by upwards of 30 to 50 percent. When coupled with even more futuristic ideas such as “vehicle to grid,” you end up with energy consumption scenarios that no one imagined when the grid was built.
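
The scale of that shift can be bounded with simple arithmetic. The inputs below are hypothetical planning assumptions (territory size, electric-vehicle share, charger draw, baseline nighttime load) chosen only to show how quickly overnight charging adds up; they are not figures from any study cited here.

  # Back-of-the-envelope sketch of overnight EV charging load.
  # All inputs are hypothetical assumptions for illustration.

  households        = 1000000    # households in a service territory
  ev_share          = 0.20       # one electric vehicle per five households
  charger_kw        = 3.3        # assumed overnight charger draw per vehicle
  overnight_load_kw = 1.5        # assumed baseline household load at night

  ev_load_kw   = households * ev_share * charger_kw
  base_load_kw = households * overnight_load_kw

  print("Added charging load (MW):", ev_load_kw / 1000)
  print("Increase over baseline: {:.0%}".format(ev_load_kw / base_load_kw))

With these assumptions, one electric vehicle per five households lifts overnight load by roughly 44 percent, consistent with the 30 to 50 percent range cited above.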

CONCLUSION

The three attributes of the smart grid – transparency, conditionality and kinematics – are interrelated. Transparency provides situational awareness, which enables conditionality. And conditionality likewise is a requirement for managing the kinematic supply and load patterns of the future. But more importantly, the smart grid is the only way the classical operating priorities of the system can be sustained – or enhanced – given the upcoming expected changes to the industry.

Making Change Work: Why Utilities Need Change Management

Organizations are often reluctant to engage change management programs, plans and teams. When they do, the programs are frequently launched too late in the project process, are only moderately funded or are absorbed within the team as part-time responsibilities – all of which we’ve seen happen time and again in the utility industry.

“Making Change Work,” an IBM study done in collaboration with the Center of Evaluation and Methods at Bonn University, analyzed the factors behind successful implementation of change. The scope of this study, released in 2007, is now being expanded because the project management and change management professions, formerly aligned, have reached a turning point of differentiation. The reason is simple: too many projects fail to consider both components as critical to success – and therefore lack insight into the day-to-day impact of a change on members of the organization.

Despite this, many organizations have been reluctant to implement change management programs, plans and teams. And when they have put such programs in place, the programs tend to be launched too late in the project process, are inadequately funded or are perceived as part-time tasks that can be assigned to members of the project management team.

WHAT IS CHANGE MANAGEMENT?

Change management is a structured approach to business transformation that manages the transition from a current state to a desired future state. Far from being static or rigid, change management is an ever-evolving program that varies with the needs of the organization. Effective change management involves people and provides open communication.

Change management is as important as project management. However, whereas project management is a tactical activity, change management represents a strategic initiative. To understand the difference, consider the following:

  • Change management is the process of driving corporate strategy by identifying, addressing and managing barriers to change across the organization or enterprise.
  • Project management is the process of implementing the tools needed to enable or mobilize the corporate strategy.

Change management is an ongoing process that works in close concert with project management. At any given time at least one phase of change management should be occurring. More likely, multiple phases will be taking place across various initiatives.

A change management program can be tailored to manage the needs of the organizational culture and relationships. The program must close the gaps among workforce, project team and sponsor leadership during all phases of all projects. It does this by:

  • Ensuring proper alignment of the organization with new technology and process requirements;
  • Preparing people for new processes and technology through training and communication;
  • Identifying and addressing human resource implications such as job definitions, union negotiations and performance measures;
  • Managing the reaction of both individuals and the entire organization to change; and
  • Providing the right level of support for ongoing implementation success.

The three fundamental activities of a change management program are leading, communicating and engaging. These three activities should span the project life cycle to maintain both awareness of the change and its momentum (Figure 1).

KEY ELEMENTS OF A CHANGE PROGRAM

There are three best-practice elements that make the difference between more successful and less successful projects: [1]

Organizational awareness for the challenges inherent in any change. This involves the following:

  • Getting a real understanding of – and leadership buy-in to – the stakeholders and culture;
  • Recognizing the interdependence of strategy and execution;
  • Ensuring an integrated strategy approach linking business strategy, operations, organization design and change, and technology strategy; and
  • Educating leadership on change requirements and commitment.

Consistent use of formal methods for change management. This should include:

  • Covering the complete life cycle – from definition to deployment to post-implementation optimization;
  • Allowing for easy customization and flexibility through a modular design;
  • Incorporating change management and value realization components into each phase to increase the likelihood of success; and
  • Providing a published plan with ongoing accountability and sponsorship as well as continuous improvement.

A specified share of the project budget that is invested in change management. This should involve:

  • Linking investment in change to project success. Projects that devote more than 10 percent of the project budget to change management achieve an average success rate of 45 percent (Figure 2). [2]
  • Assigning the right resources to support change management early on and maintaining the required support. This also limits the adverse impacts of change on an organization’s productivity (Figure 3). [3]

WHY DO UTILITIES NEED CHANGE MANAGEMENT?

Utilities today face a unique set of challenges. For starters, they’re simultaneously dealing with aging infrastructures and aging workforces. In addition, there are market pressures to improve performance, become more “green” and mitigate rising energy costs. To address these realities, many utilities are pursuing merger and acquisition (M&A) opportunities as well as implementing new technologies.

The cost cutting of the past decade combined with M&As has left utilities with gaps in workforce experience as well as budget challenges. Yet utilities are facing major business disruptions going into the next decade and beyond. To cope with these disruptions, companies are implementing new technologies such as the intelligent grid, advanced metering infrastructure (AMI), meter data management (MDM), enterprise asset management (EAM) and work management systems (WMS). It’s not uncommon for utilities to be implementing multiple new systems simultaneously that affect the day-to-day activities of people throughout the organization, from frontline workers to senior managers.

A change management program can address a number of challenges specific to the utilities industry.

CULTURAL CLIMATE: ‘BUT WE’RE DIFFERENT’

A utility is a utility is a utility. But a deeper look into individual businesses reveals nuances in their relationships with both internal and external stakeholders that are unique to each company. A change management team must intimately understand these relationships. For example, externally how is the utility perceived by regulators, customers, the community and even analysts? As for internal relationships, how do various operating divisions relate and work together? Some operating divisions work well together on project teams and respect each other and their differences; others do not.

There may be cultural differences, but work is work. Only change management can address these relationships. Knowing the utility’s cultural climate and relationships will help shape each phase of the change management program, and allow change management professionals to customize a project or system implementation to fit a company’s culture.

REGULATORY LANDSCAPE

With M&As and increasing market pressures across the United States, the regulatory landscape confronting utilities is becoming more variable. We’ve seen several types of regulatory-related challenges.

Regulatory pressure. Whether regulators mandate or simply encourage new technology implementations can make a significant difference in how stakeholders in a project behave. In general, there’s more resistance to a new technology when it’s required versus voluntarily implemented. Change management can help work through participant behaviors and mitigate obstacles so that project work can continue as planned.

Multiple regulatory jurisdictions. Many utilities with recently expanded footprints following M&As now have to manage requests from and expectations of multiple regulatory commissions. Often these commissions have different mandates. Change management initiatives are needed to work through the complexity of expectations, manage multiple regulatory relationships and drive utilities toward a unified corporate strategy.

Regulatory evolution. Just as markets evolve, so do regulatory influences and mandates. Often regulators will issue orders that can be interpreted in many ways. They may even do this to get information in the form of reactions from their various constituents. Whatever the reason, the reality is that utilities are managing an ever-changing portfolio of regulations. Change management can better prepare utilities for this constant change.

OPERATIONS MATURITY

When new systems and technologies being implemented encompass multiple operating divisions, it can be difficult for stakeholders to agree on operating standards or processes. Project team members representing the various operating regions can resist compromise for fear of losing control. This often occurs when utilities are attempting to integrate systems across operating regions following an acquisition.

Change management helps ensure that various constituents – for example, the regional operating divisions – are prepared for imminent business transformation. In large organizations, this preparation period can take a year or more. But for organizations to realize the benefits of new systems and technology implementations, they must be ready to receive the benefits. Readiness and preparedness are largely the responsibilities of the change management team.

ORGANIZATIONAL COHESIVENESS

The notion of organizational cohesiveness is that across the organization all constituents are equally committed to the business transformation initiative and have the same understanding of the overarching corporate strategy while also performing their individual roles and responsibilities.

Senior executives must align their visions and common commitment to change. After all, they set the tone for change through their respective organizations. If they are not in sync with each other, their organizations become silos, and business processes are less likely to be fluid across organizational boundaries. Frontline managers and associates must, in turn, be engaged and enthusiastic about the transformations to come.

Organizational cohesiveness is especially critical during large systems implementations involving utility field operations. Leaders at multiple locations must be ready to communicate and support change – and this support must be visible to the workforce. Utilities must understand this requirement at the beginning of a project to make change manageable, realistic and personal enough to sustain momentum. All too often, we’ve heard team members comment, “We had a lot of leadership at the project kickoff, but we really haven’t seen leadership at any of our activities or work locations since then. The project team tells us what to do.”

Moreover, leadership – when removed from the project – usually will not admit that they’re in the dark about what’s going on. Yet their lack of involvement will not escape the attention of frontline employees. Once the supervisor is perceived as lacking information – and therefore power – it’s all over. Improving customer service and quality, cutting costs, adopting new technology and merging operations all require changing employees. [4]

For utilities, the concept of organizational cohesiveness is especially important because just as much technology “lives” outside IT as inside. Yet the engineers who use this non-IT-controlled technology – what Gartner calls “operations technology” – are usually disconnected from the IT world in terms of both practical planning and execution. However, these worlds must act as one for a company to be truly agile. [5]

Change management methods and tools ensure that organizational cohesiveness exists through project implementation and beyond.

UNION ENGAGEMENT

Successful change occurs through a sustained partnership with union representatives throughout the project life cycle. Project leadership and union leadership must work together and partner to implement change. Union representation should be on the project team. Representatives can be involved in process reviews, testing and training, or asked to serve as change champions. In addition, communication is critical throughout all phases of a project. Frontline employees must see real evidence of how this change will benefit them. Change is personal: everyone wants to know how his or her job will be impacted.

There should also be union representation in training activities, since workers tend to be more receptive to peer-to-peer support. Utilities should, for example, engage union change champions to help co-workers during training and to be site “go to” representatives. Utilities should also provide advance training and recognize all who participate in it.

Union representatives should also participate in design and/or testing, since they will be able to pinpoint issues that will impact routine daily tasks. It could be something as simple as changing screen labels per their recommendation to increase user understanding.

More than one union workforce may be involved in a project. Location cultures that exist in large service territories or that have resulted from mergers may try to isolate themselves from the project team and resist change. Utilities should assemble a team from various work groups and then do the following to address the history and differences in the workforce:

  • Request ongoing union participation throughout the life of the project.
  • Include union roles as part of the project charter and define these roles with union leadership.
  • Provide a kickoff overview to union leadership.
  • Include union representation in work process development with balanced representation from various areas. Union employees know the job and can quickly identify the pros and cons of work tasks. Structured facilitation and issue-resolution processes are required.
  • Assign a corporate human resource or labor relations role to review processes that impact the union workforce.
  • Develop communication campaigns that address union concerns, such as conducting face-to-face presentations at employing locations and educating union leaders prior to each change rollout.
  • Involve union representatives in training and user support.

Change management is necessary to sort through the relationships of multiple union workforces so that projects and systems can be implemented.

AN AGING WORKFORCE

A successful change management program will help mitigate the aging workforce challenges utilities will be facing for many years to come. [6]

WHAT TO EXPECT FROM A SUCCESSFUL CHANGE MANAGEMENT PROGRAM

The result of a successful change management program is a flexible organization that’s responsive to customer needs, regulatory mandates and market pressures, and readily embraces new technologies and systems. A change-ready organization anticipates, expects and is increasingly comfortable with change and exhibits the following characteristics:

  • The organization is aligned.
  • The leaders are committed.
  • Business processes are developed and defined across all operational units.
  • Associates at all levels have received communications and have continued access to resources.

Facing major business transformations and unique industry challenges, utilities cannot afford not to engage change management programs. This skill set is just as critical as any other in your organization. Change is a cost. Change should be part of the project budget.

Change is an ongoing, long-term investment. Good change management designed specifically for your culture and challenges minimizes change’s adverse effect on daily productivity and helps you reach and sustain project goals.

ENDNOTES

  1. “Making Change Work” (an IBM study), Center of Evaluation and Methods, Bonn University, 2007; excerpts from “IBM Integrated Strategy and Change Methodology,” 2007.
  2. “Making Change Work,” Center of Evaluation and Methods, Bonn University, 2007.
  3. Ibid.
  4. T.J. Larkin and Sandar Larkin, “Communicating Change: Winning Employee Support for New Business Goals,” McGraw Hill, 1994, p. 31.
  5. K. Steenstrup, B. Williams, Z. Sumic, C. Moore; “Gartner’s Energy and Utilities Summit: Agility on Both Sides of the Divide”; Gartner Industry Research ID Number G00145388; Jan. 30, 2007; p. 2.
  6. P. R. Bruffy and J. Juliano, “Addressing the Aging Utility Workforce Challenge: ACT NOW,” Montgomery Research 2006 journal.

The Distributed Utility of the (Near) Future

The next 10 to 15 years will see major changes – what future historians might even call upheavals – in the way electricity is distributed to businesses and households throughout the United States. The exact nature of these changes and their long-term effect on the security and economic well-being of this country are difficult to predict. However, a consensus already exists among those working within the industry – as well as with politicians and regulators, economists, environmentalists and (increasingly) the general public – that these fundamental changes are inevitable.

This need for change is in evidence everywhere across the country. The February 26, 2008, temporary blackout in Florida served as just another warning that the existing paradigm is failing. Although at the time of this writing, the exact cause of that blackout had not yet been identified, the incident serves as a reminder that the nationwide interconnected transmission and distribution grid is no longer stable. To wit: disturbances in Florida on that Tuesday were noted and measured as far away as New York.

A FAILING MODEL

The existing paradigm of nationwide grid interconnection, brought about primarily by the deregulation movement of the late 1990s, emphasizes that electricity be generated at large plants in various parts of the country and then distributed nationwide. There are two reasons this paradigm is failing. First, the transmission and distribution system wasn’t designed to serve as a nationwide grid; it is aged and only marginally stable. Second, political, regulatory and social forces are making the construction of large generating plants increasingly difficult, expensive and eventually unfeasible.

The previous historic paradigm made each utility primarily responsible for generation, transmission and distribution in its own service territory; this had the benefit of localizing disturbances and distributing responsibility and expense. With loose interconnections to other states and regions, a disturbance in one area or a lack of resources in a different one had considerably less effect on other parts of the country, or even other parts of service territories.

For better or worse, we now have a nationwide interconnected grid – albeit one that was neither designed for the purpose nor serves it adequately. Although the existing grid can be improved, the expense would be massive, and probably cost prohibitive. Knowledgeable industry insiders, in fact, calculate that it would cost more than the current market value of all U.S. utilities combined to modernize the nationwide grid and replace its large generating facilities over the next 30 years. Obviously, the paradigm is going to have to change.

While the need for dramatic change is clear, though, what’s less clear is the direction that change should take. And time is running short: North American Electric Reliability Corp. (NERC) projects serious shortages in the nation’s electric supply by 2016. Utilities recognize the need; they just aren’t sure which way to jump first.

With a number of tipping points already reached (and the changes they describe continuing to accelerate), it’s easy to envision the scenario that’s about to unfold. Consider the following:

  • The United States stands to face a serious supply/demand disconnect within 10 years. Unless something dramatic happens, there simply won’t be nearly enough electricity to go around. Already, some parts of the country are feeling the pinch. And regulatory and legislative uncertainty (especially around global warming and environmental issues) makes it difficult for utilities to know what to do. Building new generation of any type other than “green energy” is extremely difficult, and green energy – which currently meets less than 3 percent of U.S. supply needs – cannot close the growing gap between supply and demand being projected by NERC. Specifically, green energy will not be able to replace the 50 percent of U.S. electricity currently supplied by coal within that 10-year time frame.
  • Fuel prices continue to escalate, and the reliability of the fuel supply continues to decline. In addition, increasing restrictions are being placed on fuel selection, especially coal.
  • A generation of utility workers is nearing retirement, and finding adequate replacements among the younger generation is proving increasingly difficult.
  • It’s extremely difficult to site new transmission – needed to deal with supply-and-demand issues. Even new Federal Energy Regulatory Commission (FERC) authority to authorize corridors is being met with virulent opposition.

SMART GRID NO SILVER BULLET

Distributed generation – including many smaller supply sources to replace fewer large ones – and “smart grids” (designed to enhance delivery efficiency and effectiveness) have been posited as solutions. However, although such solutions offer potential, they’re far from being in place today. At best, smart grids and smarter consumers are only part of the answer. They will help reduce demand (though probably not enough to make up the generation shortfall), and they’re both still evolving as concepts. While most utility executives recognize the problems, they continue to be uncertain about the solutions and have a considerable distance to go before implementing any of them, according to recent Sierra Energy Group surveys.

According to these surveys, more than 90 percent of utility executives now feel that the intelligent utility enterprise and smart grid (IUE/SG) – that is, the distributed utility – represents an inevitable part of their future (Figure 1). This finding was true of all utility types supplying electricity.

Although utility executives understand the problem and the IUE/SG approach to solving part of it, they’re behind in planning on exactly how to implement the various pieces. That “planning lag” for the vision can be seen in Figure 2.

At least some fault for the planning lag can be attributed to forces outside the utilities. While politicians and regulators have been emphasizing conservation and demand response, they’ve failed to produce guidelines for how this will work. And although a number of states have established mandatory green power percentages, Congress failed to do the same in the federal energy legislation adopted in December 2007. While the Energy Policy Act (EPACT) of 2005 “urged” regulators to “urge” utilities to install smart meters, it didn’t make their installation a requirement, and thus regulators have moved at different speeds in different parts of the country on this urging.

Although we’ve entered a new era, utilities remain burdened with the internal problems caused by the “silo mentality” left over from generations of tight regulatory control. Today, real-time data is often still jealously guarded in engineering and operations silos. However, a key component in the development of intelligent utilities will be pushing both real-time and back-office data onto dashboards so that executives can make real-time decisions.

Getting from where utilities were (and in many respects still are) in the last century to where they need to be by 2018 isn’t a problem that can be solved overnight. And, in fact, utilities have historically evolved slowly. Today’s executives know that technological evolution in the utility industry needs to accelerate rapidly, but they’re uncertain where to start. For example, should you install advanced metering infrastructure (AMI) as rapidly as possible? Do you emphasize automating the grid and adding artificial intelligence? Do you continue to build out mobile systems to push data (and more detailed, simpler instructions) to field crews who soon will be much younger and less experienced? Do you rush into home automation? Do you build windmills and solar farms? Utilities have neither the financial nor human resources to do everything at once.

THE DEMAND FOR AMI

Its name implies that a smart grid will become increasingly self-operating and self-healing – and indeed much of the technology for this type of intelligent network grid has been developed. It has not, however, been widely deployed. Utilities, in fact, have been working on basic distribution automation (DA) – the capability to operate the grid remotely – for a number of years.

As mentioned earlier, most theorists – not to mention politicians and regulators – feel that utilities will have to enable AMI and demand response/home automation if they’re to encourage energy conservation in an impending era of short supplies. While automated meter reading (AMR) has been around for a long time, its penetration remains relatively small in the utilities industry – especially in the case of advanced AMI meters for enabling demand response. According to figures released by Sierra Energy Group and Newton-Evans Research Co., only 8 to 10 percent of this country’s utilities were using AMI meters by 2008.

That said, the push for AMI on the part of both EPACT 2005 and regulators is having an obvious effect. Numerous utilities (including companies like Entergy and Southern Co.) that previously refused to consider AMR now have AMI projects in progress. However, even though an anticipated building boom in AMI is finally underway, there’s still much to be done to enable the demand response that will be desperately needed by 2016.

THE AUTOMATED HOME

The final area we can expect the IUE/SG concept to envelop is the residential level. With residential home automation in place, utilities will be able to control usage directly – by adjusting thermostats or compressor cycling, or via other techniques. Again, the technology for this has existed for some time; however, there are very few installations nationwide. A number of experiments were conducted with home automation in the early- to mid-1990s, with some subdivisions even being built under the mantra of “demand-side management.”

Demand response – the term currently in vogue with politicians – may be considered more politically correct, but the net result is the same. Home automation will enable regulators, through utilities, to ration usage. Although politicians avoid using the word rationing, if global warming concerns continue to seriously impact utilities’ ability to access adequate generation, rationing will be the result – making direct load control at the residential level one of the most problematic issues in the distributed utility paradigm of the future. Are large numbers of Americans going to acquiesce calmly to their electrical supply being rationed? No one knows, but there seem to be few options.

GREEN PRESSURE AND THE TIPPING POINT

While much legitimate scientific debate remains about whether global warming is real and, if so, whether it’s a naturally occurring or man-made phenomenon (arising primarily from carbon dioxide emissions), that debate is diminishing among politicians at every level. The majority of politicians, in fact, have bought into the notion that carbon emissions from many sources – primarily the generation of electricity by burning coal – are the culprit.

Thus, despite continued scientific debate, the political tipping point has been reached, and U.S. politicians are making moves to force this country’s utility industry to adapt to a situation that may or may not be real. Whether or not it makes logical or economic sense, utilities are under increasing pressure to adopt the Intelligent Utility/Smart Grid/Home Automation/Demand Response model – a model that includes many small generation points to make up for fewer large plants. This political tipping point is also shutting down more proposed generation projects each month, adding to the likely shortage. Since 2000, approximately 50 percent of all proposed new coal-fired generation plants have been canceled, according to energy-industry adviser Wood Mackenzie (Gas and Power Service Insight, February 2008).

In the distant future, as technology continues to advance, electric generation in the United States will likely include a mix of energy sources, many of them distributed and green. However, there’s no way that in the next 10 years – the window of greatest concern in the NERC projections on the generation and reliability side – green energy will be ready and available in sufficient quantities to forestall a significant electricity shortfall. Nuclear energy represents the only truly viable solution; however, ongoing opposition to this form of power generation makes it unlikely that sufficient nuclear energy will be available within this period. The already-lengthy licensing process (though streamlined somewhat of late by the Nuclear Regulatory Commission) is exacerbated by lawsuits and opposition every step of the way. In addition, most of the necessary engineering and manufacturing processes have been lost in the United States over the last 30 years – the time elapsed since the last U.S. nuclear plant was built – making it necessary to reacquire that knowledge from abroad.

The NERC Reliability Report of Oct. 15, 2007, points strongly toward a significant shortfall of electricity within approximately 10 years – a situation that could lead to rolling blackouts and brownouts in parts of the country that have never experienced them before. It could also lead to mandatory “demand response” – in other words, rationing – at the residential level. This situation, however, is not inevitable: technology exists to prevent it (including nuclear and cleaner coal now as well as a gradual development of solar, biomass, sequestration and so on over time, with wind for peaking). But thanks to concern over global warming and other issues raised by the environmental community, many politicians and regulators have become convinced otherwise. And thus, they won’t consider a different tack to solving the problem until there’s a public outcry – and that’s not likely to occur for another 10 years, at which point the national economy and utilities may already have suffered tremendous (possibly irreparable) harm.

WHAT CAN BE DONE?

The problem the utilities industry faces today is neither economic nor technological – it’s ideological. The global warming alarmists are shutting down coal before sufficient economically viable replacements (with the possible exception of nuclear) are in place. And the rest of the options are tied up in court. (For example, the United States needs some 45 liquefied natural gas plants if it is to convert to gas – a costly fuel with iffy reliability – but only five have been built; the rest are tied up in court.) As long as it’s possible to tie up nuclear applications for five to 10 years and shut down “clean coal” plants through the political process, the U.S. utility industry is left with few options.

So what are utilities to do? They must get much smarter (IUE/SG), and they must prepare for rationing (AMI/demand response). As seen in SEG studies, utilities still have a ways to go in these areas, but at least this is a strategy that can (for the most part) be put in place within 10 to 15 years. The technology for IUE/SG already exists; it’s relatively inexpensive (compared with large-scale green energy development and nuclear plant construction); and utilities can employ it with relatively little regulatory oversight. In fact, regulators are actually encouraging it.

For these reasons, IUE/SG represents a major bridge to a more stable future. Even if today’s apocalyptic scenarios fail to develop – that is, global warming is debunked, or new generation sources develop much more rapidly than expected – intelligent utilities with smart grids will remain a good idea. The paradigm is shifting as we watch – but will that shift be completed in time to prevent major economic and social dislocation? Fasten your seatbelts: the next 10 to 15 years should be very interesting!

Weathering the Perfect Storm

A “perfect storm” of daunting proportions is bearing down on utility companies: assets are aging; the workforce is aging; and legacy information technology (IT) systems are becoming an impediment to efficiency improvements. This article suggests a three-pronged strategy to meet the challenges posed by this triple threat. By implementing best practices in the areas of business process management (BPM), system consolidation and IT service management (ITSM), utilities can operate more efficiently and profitably while addressing their aging infrastructure and staff.

BUSINESS PROCESS MANAGEMENT

In a recent speech before the Utilities Technology Conference, the CIO of one of North America’s largest integrated gas and electric utilities commented that “information technology is a key to future growth and will provide us with a sustainable competitive advantage.” The quest by utilities to improve shareholder and customer satisfaction has led many CIOs to reach this same conclusion: nearly all of their efforts to reduce the costs of managing assets depend on information management.

Echoing this observation, a survey of utility CIOs showed that the top business issue in the industry was the need to improve business process management (BPM).[1] It’s easy to see why.

BPM enables utilities to capture, propagate and evolve asset management best practices while maintaining alignment between work processes and business goals. For most companies, the standardized business processes associated with BPM drive work and asset management activities and bring a host of competitive advantages, including improvements in risk management, revenue generation and customer satisfaction. Standardized business processes also allow management to more successfully implement business transformation in an environment that may include workers acquired in a merger, workers nearing retirement and new workers of any age.

BPM also helps enforce a desirable culture change by creating an adaptive enterprise where agility, flexibility and top-to-bottom alignment of work processes with business goals drive the utility’s operations. These work processes need to be flexible so management can quickly respond to the next bump in the competitive landscape. Using standard work processes drives desired behavior across the organization while promoting the capture of asset-related knowledge held by many long-term employees.

Utility executives also depend on technology-based BPM to improve processes for managing assets. This allows them to reduce staffing levels without affecting worker safety, system reliability or customer satisfaction. These processes, when standardized and enforced, result in common work practices throughout the organization, regardless of region or business unit. BPM can thus yield an integrated set of applications that can be deployed in a pragmatic manner to improve work processes, meet regulatory requirements and reduce total cost of ownership (TCO) of assets.

BPM Capabilities

Although the terms business process management and work flow are often used synonymously – and are indeed related – they refer to distinctly different things. BPM is a strategic activity undertaken by an organization looking to standardize and optimize business processes, whereas work flow refers to IT solutions that automate processes – for example, solutions that support the execution phase of BPM.

There are a number of core BPM capabilities that, although individually important, are even more powerful than the sum of their parts when leveraged together. Combined, they provide a powerful solution to standardize, execute, enforce, test and continuously improve asset management business processes. These capabilities include:

  • Support for local process variations within a common process model;
  • Visual design tools;
  • Revision management of process definitions;
  • Web services interaction with other solutions;
  • XML-based process and escalation definitions;
  • Event-driven user interface interactions;
  • Component-based definition of processes and subprocesses; and
  • Single engine supporting push-based (work flow) and polling-based (escalation) processes.

Since BPM supports knowledge capture from experienced employees, what is the relationship between BPM and knowledge management? Research has shown that the best way to capture the knowledge that resides in workers’ heads is to transfer it into systems they already use. Work and asset management systems hold job plans, operational steps, procedures, images, drawings and other documents. These systems are also the best place to put information required to perform a task that an experienced worker “just knows” how to do.

By creating appropriate work flows in support of BPM, workers can be guided through a “debriefing” stage, where they can review existing job plans and procedures, and look for tasks not sufficiently defined to be performed without the tacit knowledge learned through experience. Then, the procedure can be flagged for additional input by a knowledgeable craftsperson. This same approach can even help ensure the success of the “debriefing” application itself, since BPM tools by definition allow guidance to be built in by creating online help or by enhancing screen text to explain the next step.
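
A purely illustrative sketch of that flagging step appears below: job plans a worker marks as under-documented are routed to a review queue for an experienced craftsperson. The record fields, statuses and queue are hypothetical stand-ins for whatever the work and asset management system actually provides.

  # Hypothetical debriefing work flow: job plans flagged during review are
  # escalated to a subject-matter-expert queue for knowledge capture.

  job_plans = [
      {"id": "JP-210", "task": "Recloser inspection", "needs_tacit_knowledge": None},
      {"id": "JP-305", "task": "Breaker racking", "needs_tacit_knowledge": None},
  ]

  sme_review_queue = []

  def debrief(plan, worker_flagged_gap):
      """Record the worker's judgment and escalate plans that need expert input."""
      plan["needs_tacit_knowledge"] = worker_flagged_gap
      if worker_flagged_gap:
          sme_review_queue.append(plan["id"])

  debrief(job_plans[0], worker_flagged_gap=False)
  debrief(job_plans[1], worker_flagged_gap=True)
  print("Plans routed for knowledge capture:", sme_review_queue)

The same pattern extends naturally to the push-based work flow and polling-based escalation capabilities listed earlier.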

SYSTEM CONSOLIDATION

System consolidation needs to involve more than simply combining applications. For utilities, system consolidation efforts ought to focus on making systems agile enough to support near real-time visibility into critical asset data. This agility will yield transparency across lines of business on the one hand and satisfy regulators and customers on the other. To achieve this level of transparency, utilities must adopt a modern enterprise architecture that supports both service-oriented architectures (SOAs) and BPM.

Done right, system consolidation allows utilities to create a framework supporting three key business areas:

  • Optimization of both human and physical assets;
  • Standardization of processes, data and accountability; and
  • Flexibility to change and adapt to what’s next.

The Need for Consolidation

Many utility transmission and distribution (T&D) divisions exhibit this need for consolidation. Over time, the business operations of many of these divisions have introduced different systems to support a perceived immediate need – without considering similar systems that may already be implemented within the utility. Eventually, the business finds it owns three different “stacks” of systems managing assets, work assignments and mobile workers – one for short-cycle service work, one for construction and still another for maintenance and inspection work.

With these systems in place, it’s nearly impossible to implement productivity programs – such as cross-training field crews in both construction and service work – or to take advantage of a “common work queue” that would allow workers to fill open time slots without returning to their regional service center. In addition, owning and operating these “siloed” systems adds significant IT costs, as each one has annual maintenance fees, integration costs, yearly application upgrades and retraining requirements.

In such cases, using one system for all work and asset management would eliminate multiple applications and deliver bottom-line operational benefits: more productive workers, more reliable assets and technology cost savings. One large Midwestern utility adopting the system consolidation approach was able to standardize on six core applications: work and asset management, financials, document management, geographic information systems (GIS), scheduling and mobile workforce management. The asset management system alone was able to consolidate more than 60 legacy applications. In addition to the obvious cost savings, these consolidated asset management systems are better able to address operational risk, worker health and safety and regulatory compliance – both operational and financial – making utilities more competitive.

A related benefit of system consolidation concerns the elimination of rogue “pop-up” applications. These are niche applications, often spreadsheets or standalone databases, that “pop up” throughout an organization on engineers’ desktops. Many of these applications perform critical roles in regulatory compliance yet are unlikely to pass muster at any Sarbanes-Oxley review. Typically, these pop-up applications are built to fill a “functionality gap” in existing legacy systems. Using an asset management system with a standards-based platform allows utilities to roll these pop-up applications directly into their standard supported work and asset management system.

Employees must interact with many systems in a typical day. How productive is the maintenance electrician who uses one system for work management, one for ordering parts and yet another for reporting his or her time at the end of a shift? Think of the time wasted navigating three distinct systems with different user interfaces, and the duplication of data that unavoidably occurs. How much more efficient would it be if the electrician were able to use one system that supported all of his or her work requirements? A logical grouping of systems clearly enables all workers to leverage information technology to be more efficient and effective.

Today, using modern, standards-based technologies like SOAs, utilities can eliminate the counterproductive mix of disparate commercial and “home-grown” systems. Automated processes can be delivered as Web services, allowing asset and service management to be included in the enterprise application portfolio, joining the ranks of human resource (HR), finance and other business-critical applications.
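
As a minimal sketch of what “delivered as a Web service” can mean in practice – using only Python's standard HTTP server rather than any particular SOA product, with invented asset identifiers and an invented URL path – an asset-status lookup might be exposed as follows.

  # Minimal sketch: exposing an asset-status lookup as a simple Web service.
  # The asset data and URL scheme are hypothetical; a production SOA would use
  # the organization's own service bus, schemas and security controls.

  import json
  from http.server import BaseHTTPRequestHandler, HTTPServer

  ASSET_STATUS = {"XFMR-1042": "in service", "RECL-077": "out for maintenance"}

  class AssetStatusHandler(BaseHTTPRequestHandler):
      def do_GET(self):
          # Expected path: /assets/<asset-id>
          asset_id = self.path.rsplit("/", 1)[-1]
          status = ASSET_STATUS.get(asset_id)
          body = json.dumps({"asset": asset_id, "status": status or "unknown"})
          self.send_response(200 if status else 404)
          self.send_header("Content-Type", "application/json")
          self.end_headers()
          self.wfile.write(body.encode("utf-8"))

  if __name__ == "__main__":
      HTTPServer(("localhost", 8080), AssetStatusHandler).serve_forever()

Any consuming application – work management, finance or a reporting dashboard – could then query asset status over the network instead of reaching into a siloed database.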

But although system consolidation in general is a good thing, there is a “tipping point” where consolidating simply for the sake of consolidation no longer provides a meaningful return and can actually erode savings and productivity gains. A system consolidation strategy should center on core competencies. For example, accountants or doctors are both skilled service professionals. But their similarity on that high level doesn’t mean you would trade one for the other just to “consolidate” the bills you receive and the checks you have to write. You don’t want accountants reading your X-rays. The same is true for your systems’ needs. Your organization’s accounting or human resource software does not possess the unique capabilities to help you manage your mission-critical transmission and distribution, facilities, vehicle fleet or IT assets. Hence it is unwise to consolidate these mission-critical systems.

System consolidation strategically aligned with business requirements offers huge opportunities for improving productivity and eliminating IT costs. It also improves an organization’s agility and reverses the historical drift toward stovepipe or niche systems by providing appropriate systems for critical roles and stakeholders within the organization.

IT SERVICE MANAGEMENT

IT Service Management (ITSM) is critical to helping utilities deal with aging assets, infrastructure and employees primarily because ITSM enables companies to surf the accelerating trend of asset management convergence instead of falling behind more nimble competitors. Used in combination with pragmatic BPM and system consolidation strategies, ITSM can help utilities exploit the opportunities that this trend presents.

Three key factors are driving the convergence of management processes across IT assets (PCs, servers and the like) and operational assets (the systems and equipment through which utilities deliver service). The first concerns corporate governance, whereby corporate-wide standards and policies are forcing operational units to rethink their use of “siloed” technologies and are paving the way for new, more integrated investments. Second, utilities are realizing that to deal with their aging assets, workforce and systems dilemmas, they must increase their investments in advanced information and engineering technologies. Finally, the functional boundaries between the IT and operational assets themselves are blurring beyond recognition as more and more equipment utilizes on-board computational systems and is linked over the network via IP addresses.

Utilities need to understand this growing interdependency among assets, including the way individual assets affect service to the business and the requirement to provide visibility into asset status in order to properly address questions relating to risk management and compliance.

Corporate Governance Fuels a Cultural Shift

The convergence of IT and operational technology is changing the relationship between the formerly separate operational and IT groups. The operational units are increasingly relying on IT to help deal with their “aging trilogy” problem, as well as to meet escalating regulatory compliance demands and customers’ reliability expectations. In the past, operating units purchased advanced technology (such as advanced metering or substation automation systems) on an as-needed basis, unfettered by corporate IT policies and standards. In the process, they created multiple silos of nonstandard, non-integrated systems. But now, as their dependence on IT grows, corporate governance policies are forcing operating units to work within IT’s framework. Utilities can’t afford the liability and maintenance costs of nonstandard, disparate systems scattered across their operational and IT efforts. This growing dependence on IT has thus created a new cultural challenge.

A study by Gartner of the interactions among IT and operational technology highlights this challenge. It found that “to improve agility and achieve the next level of efficiencies, utilities must embrace technologies that will enable enterprise application access to real-time information for dynamic optimization of business processes. On the other hand, lines of business (LOBs) will increasingly rely on IT organizations because IT is pervasively embedded in operational and energy technologies, and because standard IT platforms, application architectures and communication protocols are getting wider acceptance by OT [operational technology] vendors.”[2]

In fact, an InformationWeek article (“Changes at C-Level,” August 1, 2006) warned that this cultural shift could result in operational conflict if not dealt with. In that article, Nathan Bennett and Stephen Miles wrote, “Companies that look to the IT department to bring a competitive edge and drive revenue growth may find themselves facing an unexpected roadblock: their CIO and COO are butting heads.” As IT assumes more responsibility for running a utility’s operations, the roles of CIO and COO will increasingly converge.

What Is an IT Asset, Anyhow?

An important reason for this shift is the changing nature of the assets themselves, as mentioned previously. Consider the question “What is an IT asset?” In the past, most people would say that this referred to things like PCs, servers, networks and software. But what about a smart meter? It has firmware that needs updates; it resides on a wired or wireless network; and it has an IP address. In an intelligent utility network (IUN), this is true of substation automation equipment and other field-located equipment. The same is true for plant-based monitoring and control equipment. So today, if a smart device fails, do you send a mechanic or an IT technician?

This question underscores why IT asset and service management will play an increasingly important role in a utility’s operations. Utilities will certainly be using more complex technology to operate and maintain assets in the future. Electronic monitoring of asset health and performance based on conditions such as meter or sensor readings and state changes can dramatically improve asset reliability. Remote monitoring agents – from third-party condition monitoring vendors or original equipment manufacturers (OEMs) of highly specialized assets – can help analyze the increasingly complex assets being installed today as well as optimize preventive maintenance and resource planning.

Moreover, utilities will increasingly rely on advanced technology to help them overcome the challenges of their aging assets, workers and systems. For example, as noted above, advanced information technology will be needed to capture the tacit knowledge of experienced workers as well as replace some manual functions with automated systems. Inevitably, operational units will become technology-driven organizations, heavily dependent on the automated systems and processes associated with IT asset and service management.

The good news for utilities is that a playbook of sorts is available that can help them chart the ITSM waters in the future. The de facto global standard for best practices process guidance in ITSM is the IT Infrastructure Library (ITIL), which IT organizations can adopt to support their utility’s business goals. ITIL-based processes can help utilities better manage IT changes, assets, staff and service levels. ITIL extends beyond simple management of asset and service desk activities, creating a more proactive organization that can reduce asset failures, improve customer satisfaction and cut costs. Key components of ITIL best practices include configuration, problem, incident, change and service-level management activities.

Implemented together, ITSM best practices as embodied in ITIL can help utilities:

  • Better align asset health and performance with the needs of the business;
  • Improve risk and compliance management;
  • Improve operational excellence;
  • Reduce the cost of infrastructure support services;
  • Capture tacit knowledge from an aging workforce;
  • Utilize business process management concepts; and
  • More effectively leverage their intelligent assets.

CONCLUSION

The “perfect storm” brought about by aging assets, an aging workforce and legacy IT systems is challenging utilities in ways many have never experienced. The current, fragmented approach to managing assets and services has been a “good enough” solution for most utilities until now. But good enough isn’t good enough anymore, since this fragmentation often has led to siloed systems and organizational “blind spots” that compromise business operations and could lead to regulatory compliance risks.

The convergence of IT and operational technology (with its attendant convergence of asset management processes) represents a challenging cultural change; however, it’s a change that can ultimately confer benefits for utilities. These benefits include not only improvements to the bottom line but also improvements in the agility of the operation and its ability to control risks and meet compliance requirements associated with asset and service management activity.

To help weather the coming perfect storm, utilities can implement best practices in three key areas:

  • BPM technology can help utilities capture and propagate asset management best practices to mitigate the looming “brain drain” and improve operational processes.
  • Judicious system consolidation can improve operational efficiency and eliminate legacy systems that are burdening the business.
  • ITSM best practices as exemplified by ITIL can streamline the convergence of IT and operational assets while supporting a positive cultural shift to help operational business units integrate with IT activities and standards.

Best-practices management of all critical assets based on these guidelines will help utilities facilitate the visibility, control and standardization required to continuously improve today’s power generation and delivery environment.

ENDNOTES

  1. Gartner’s 2006 CIO Agenda survey.
  2. Bradley Williams, Zarko Sumic, James Spiers, Kristian Steenstrup, “IT and OT Interaction: Why Conflict Resolution Is Important,” Gartner Industry Research, Sept. 15, 2006.

Advanced Metering Infrastructure: The Case for Transformation

Although the most basic operational benefits of an advanced metering infrastructure (AMI) initiative can be achieved by simply implementing standard technological features and revamping existing processes, this approach fails to leverage the full potential of AMI to redefine the customer experience and transform the utility operating model. In addition to the obvious operational benefits – including a significant reduction in field personnel and a decrease in peak load on the system – AMI solutions have the potential to achieve broader strategic, environmental and regulatory benefits by redefining the utility-customer relationship. To capture these broader benefits, however, utilities must view AMI as a transformation initiative, not simply a technology implementation project. Utilities must couple their AMI implementations with a broader operational overhaul and take a structured approach to applying the operating capabilities required to take advantage of AMI’s vast opportunities. One key step in this structured approach to transformation is enterprise-wide business process design.

WHY “AS IS” PROCESSES WON’T WORK FOR AMI

Due to the antiquated and fragmented nature of utility processes and systems, adapting “as is” processes alone will not be sufficient to realize the full range of AMI benefits. Multiple decades of industry consolidation have resulted in utilities with diverse business processes reflecting multiple legacy company operating practices. Associated with these diverse business processes is a redundant set of largely homegrown applications resulting in operational inefficiencies that may impact customer service and reliability, and prevent utilities from adapting to new strategic initiatives (such as AMI) as they emerge.

For example, in the as-is environment, utilities are often slow to react to changes in customer preferences and require multiple functional areas to respond to a simple customer request. A request by a customer to enroll in a new program, for example, will involve at least three organizations within the utility: the call center initially handles the customer request; the field services group manages changing or reprogramming the customer’s meter to support the new program; and the billing group processes the request to ensure that the customer is correctly enrolled in the program and is billed accordingly. In most cases, a simple request like this can result in long delays to the customer due to disjointed processes with multiple hand-off points.
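
To make the cost of these hand-offs concrete, the following sketch (purely illustrative; the department names and backlog figures are assumptions, not measured data) models the enrollment request as a chain of sequential queues, so the customer’s total wait is simply the sum of each group’s backlog. It also shows how removing the field visit, which AMI makes possible through remote meter reprogramming, shortens the chain.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Handoff:
        department: str
        task: str
        backlog_days: float   # assumed average wait before the group works the request

    # Illustrative as-is flow: three separate groups each queue the same request.
    AS_IS_FLOW: List[Handoff] = [
        Handoff("Call center", "capture program enrollment request", 1.0),
        Handoff("Field services", "reprogram or exchange the meter", 7.0),
        Handoff("Billing", "enroll customer on the new rate", 3.0),
    ]

    def end_to_end_delay(flow: List[Handoff]) -> float:
        """Total customer wait is the sum of each sequential hand-off's backlog."""
        return sum(step.backlog_days for step in flow)

    if __name__ == "__main__":
        print(f"As-is enrollment delay: {end_to_end_delay(AS_IS_FLOW):.0f} days")
        # With AMI, the meter can be reprogrammed remotely, removing the field visit entirely.
        ami_flow = [s for s in AS_IS_FLOW if s.department != "Field services"]
        print(f"AMI-enabled delay: {end_to_end_delay(ami_flow):.0f} days")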

WHY USE AMI AS THE CATALYST FOR OPERATIONAL TRANSFORMATION?

The revolutionary nature of AMI technology and its potential for application to multiple areas of the utility make an AMI implementation the perfect opportunity to adapt the utility operating structure. To use AMI as a platform for operational transformation, utilities must shift their thinking from functionally based structures to enterprise-wide, process-centric environments. This approach will ensure that utilities take full advantage of AMI’s technological capabilities without being constrained by existing processes and organizational structures.

If the utility is to offer new programs and services as well as respond to shifting external demands, it must anticipate and respond quickly to changes in behaviors. Rapid information dissemination and quick response to changes in business, environmental and economic situations are essential for utilities that wish to encourage customers to think of energy in a new way and proactively manage their usage through participation in time-of-use and real-time demand response programs. This transition requires that system and organizational hand-offs be integrated to create a seamless and flexible work flow. Without this integration, utilities cannot proactively and quickly adapt processes to satisfy ever-increasing customer expectations. In essence, AMI fails if “smart meters” and “smart systems” are implemented without “smart processes” to support them.

DESIGNING SMART PROCESSES

Designing smart future state business processes to support transformational initiatives such as AMI involves more than just rearranging existing work flows. Instead, a utility must adopt a comprehensive approach to business process design – one that engages stakeholders throughout the organization and that enables them to design processes from the ground up. The utility must also design flexible processes that can adapt to changing customer, technology, business and regulatory expectations while avoiding the pitfalls of the current organization and process structure. As part of a utility’s business process design effort, it must also redefine jobs more broadly, increase training to support those jobs, enable decision making by front-line personnel and redirect reward systems to focus on processes as well as outcomes. Utilities must also reshape organizational cultures to emphasize teamwork, personal accountability and the customer’s importance; to redefine roles and responsibilities so that managers oversee processes instead of activities and develop people rather than supervise them; and to realign information systems so that they help cross-functional processes work smoothly rather than simply support individual functional areas.

BUSINESS PROCESS DESIGN FRAMEWORK

IBM’s enterprise-wide business process design framework provides a structured approach to the development of the future state processes that support operational transformations and the complexities of AMI initiatives. This framework empowers utilities to apply business process design as the cornerstone of a broader effort to transition to a customer-centric organization capable of engaging external stakeholders. It also supports corporate decision making and continuous improvement by emphasizing real-time metrics and measurement of operational procedures. The framework is made up of the following five phases (Figure 1):

Phase 1 – As-is functional assessment. During this phase, utilities assess their current state processes and supporting organizations and systems. The goal of this phase is to identify gaps, overlaps and conflicts with existing processes and to identify opportunities to leverage the AMI technology. This assessment requires utility stakeholders to dissect existing processes throughout the organization and identify instances where the utility is unable to fully meet customer, environmental and regulatory demands. The final step in this phase is to define a set of “future state” goals to guide process development. These goals must address all of the relevant opportunities to both improve existing processes and perform new functions and services.

Phase 2 – Future state process analysis. During this phase, utilities design end-to-end processes that meet the future state goals defined in Phase 1. To complete this effort, utilities must synthesize components from multiple functional areas and think outside the current organizational hierarchy. This phase requires engagement from participants throughout the utility organization, and participants should be encouraged to envision all relevant opportunities for using AMI to improve the utility’s relationship with customers, regulators and the environment. At the conclusion of this phase, all processes should be assessed in terms of their ability to alleviate the current state issues and to meet the future state goals defined in Phase 1.

Phase 3 – Impact identification. During this phase, utilities identify the organizational structure and corporate initiatives necessary to “operationalize” the future state processes. Key questions answered during this phase include how will utilities transition from current to future state? How will each functional area absorb the necessary changes? And what are the new organizations, roles and skills needed? This phase requires the utility to think outside of the current organizational structure to identify the optimal way to support the processes designed in Phase 2. During the impact identification phase of business process design, it’s crucial that process be positioned as the dominant organizational axis. Because process-organized utilities are not bound to a conventional hierarchy or fixed organizational structure, they can be customer-centric, make flexible use of their resources and respond rapidly to new business situations.

Phase 4 – Socialization. During this phase, utilities focus on obtaining ownership and buy-in from the impacted organizations and broader group of internal and external stakeholders. This phase often involves piloting the new processes and technology in a test environment and reaching out to a small set of customers to solicit feedback. This phase is also marked by the transition of the products from the first three phases of the business process design effort to the teams affected by the new processes – namely the impacted business areas as well as the organizational change management and information technology teams.

Phase 5 – Implementation and measurement. During the final phase of the business process design framework, the utility transitions from planning and design to implementation. The first step of this phase is to define the metrics and key performance indicators (KPIs) that will be used to measure the success of the new processes – necessary if organizations and managers are to be held responsible for the new processes, and for guiding continuous refinement and improvement. After these metrics have been established, the new organizational structure is put in place and the new processes are introduced to this structure.
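
As one hedged illustration of the Phase 5 measurement step, the short sketch below defines two hypothetical process KPIs and evaluates them against target thresholds; the metric names, formulas and targets are assumptions chosen for illustration rather than recommended industry values.

    from dataclasses import dataclass
    from typing import Callable, Dict

    @dataclass
    class KPI:
        name: str
        target: float
        higher_is_better: bool
        compute: Callable[[Dict[str, float]], float]   # derives the KPI from raw process counters

    # Hypothetical KPIs for a new enrollment process introduced in Phase 5.
    KPIS = [
        KPI("First-contact enrollment rate", target=0.85, higher_is_better=True,
            compute=lambda d: d["enrolled_first_contact"] / d["enrollment_requests"]),
        KPI("Average enrollment cycle time (days)", target=2.0, higher_is_better=False,
            compute=lambda d: d["total_cycle_days"] / d["enrollment_requests"]),
    ]

    def score(raw: Dict[str, float]) -> None:
        """Report whether each KPI meets its target, for continuous refinement."""
        for kpi in KPIS:
            value = kpi.compute(raw)
            met = value >= kpi.target if kpi.higher_is_better else value <= kpi.target
            print(f"{kpi.name}: {value:.2f} (target {kpi.target}) -> {'met' if met else 'not met'}")

    score({"enrollment_requests": 200, "enrolled_first_contact": 176, "total_cycle_days": 380})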

BENEFITS AND CHALLENGES OF BUSINESS PROCESS DESIGN

The business process design framework outlined above facilitates the permeation of the utility goals and objectives throughout the entire organization. This effort does not succeed, though, without significant participation from internal stakeholders and strong sponsorship from key executives.

The benefits of this approach include the following:

  • It facilitates ownership. Because the management team is engaged at the beginning of the AMI transformation, managers are encouraged to own future state processes from initial design through implementation.
  • It identifies key issues. A comprehensive business design effort allows for earlier visibility into key integration issues and provides ample time to resolve them prior to rolling out the technologies to the field.
  • It promotes additional capabilities. The business process framework enables the utility to develop innovative ways to apply the AMI technology and ensures that future state processes are aligned to business outcomes.
  • It puts the focus on customers. A thorough business process effort ensures that the necessary processes and functional groups are put in place to empower and inform the utility customer.

The challenges of this approach include the following:

  • It entails a complex transition. The utility must manage the complexities and ambiguities of shifting from functional-based operations to process-based management and decision making.
  • It can lead to high expectations. The utility must also manage stakeholder expectations and be clear that change will be slow and painful. Revolutionary change is made through evolutionary steps – meaning that utilities cannot expect to take very large steps at any point in the process.
  • There may be technological limitations. Throughout the business process design effort, utilities will identify new ways to improve customer satisfaction through the use of AMI technology. The standard technology, however, may not always support these visions; thus, utilities must be prepared to work with vendors to support the new processes.

Although execution of future state business process design undoubtedly requires a high degree of effort, a successful operational transformation is necessary to truly leverage the features of AMI technology. If utilities expect to achieve broad-reaching benefits, they must put in place the operational and organizational structures to support the transformational initiatives. Utilities cannot afford to think of AMI as a standard technology implementation or to jump immediately to the definition of system and technology requirements. This approach will inevitably limit the impact of AMI solutions and leave utilities implementing cutting-edge technology with fragmented processes and inflexible, functionally based organizational structures.

The Smart Grid: A Balanced View

Energy systems in both mature and developing economies around the world are undergoing fundamental changes. There are early signs of a physical transition from the current centralized energy generation infrastructure toward a distributed generation model, where active network management throughout the system creates a responsive and manageable alignment of supply and demand. At the same time, the desire for market liquidity and transparency is driving the world toward larger trading areas – from national to regional – and providing end-users with new incentives to consume energy more wisely.

CHALLENGES RELATED TO A LOW-CARBON ENERGY MIX

The structure of current energy systems is changing. As load and demand for energy continue to grow, many current-generation assets – particularly coal and nuclear systems – are aging and reaching the end of their useful lives. The increasing public awareness of sustainability is simultaneously driving the international community and national governments alike to accelerate the adoption of low-carbon generation methods. Complicating matters, public acceptance of nuclear energy varies widely from region to region.

Public expectations of what distributed renewable energy sources can deliver – for example, wind, photovoltaic (PV) or micro-combined heat and power (micro-CHP) – are increasing. But unlike conventional sources of generation, the output of many of these sources is not based on electricity load but on weather conditions or heat. From a system perspective, this raises new challenges for balancing supply and demand.
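
A simple numerical illustration of this balancing challenge, using invented figures: even when demand is nearly flat, weather-driven output makes the residual (net) load that dispatchable plants must cover swing sharply from hour to hour.

    # Illustrative only: hourly demand is nearly flat, but weather-driven wind output
    # makes the residual load that dispatchable plants must cover swing widely.
    demand_mw = [1500, 1520, 1510, 1500]   # assumed system load, MW
    wind_mw   = [600,  150,  700,  300]    # weather-dependent wind output, MW

    net_load = [d - w for d, w in zip(demand_mw, wind_mw)]
    print(net_load)   # [900, 1370, 810, 1200] -> swings of up to 560 MW to balance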

In addition, these new distributed generation technologies require system-dispatching tools to effectively control the low-voltage side of electrical grids. Moreover, they indirectly create a scarcity of “regulating energy” – the energy necessary for transmission operators to maintain the real-time balance of their grids. This forces the industry to try and harness the power of conventional central generation technologies, such as nuclear power, in new ways.

A European Union-funded consortium named Fenix is identifying innovative network and market services that distributed energy resources can potentially deliver, once the grid becomes “smart” enough to integrate all energy resources.

In Figure 1, the Status Quo Future represents how system development would play out under the traditional system operation paradigm characterized by today’s centralized control and passive distribution networks. The alternative, Fenix Future, represents the system capacities with distributed energy resources (DER) and demand-side generation fully integrated into system operation, under a decentralized operating paradigm.

CHALLENGES RELATED TO NETWORK OPERATIONAL SECURITY

The regulatory push toward larger trading areas is increasing the number of market participants. This trend is in turn driving the need for increased network dispatch and control capabilities. Simultaneously, grid operators are expanding their responsibilities across new and complex geographic regions. Combine these factors with an aging workforce (particularly when trying to staff strategic processes such as dispatching), and it’s easy to see why utilities are becoming increasingly dependent on information technology to automate processes that were once performed manually.

Moreover, the stochastic nature of energy sources significantly increases uncertainty regarding supply. Researchers are trying to improve the accuracy of the information captured in substations, but this requires new online dispatching stability tools. Additionally, as grid expansion remains politically controversial, current efforts are mostly focused on optimizing energy flow in existing physical assets, and on trying to feed asset data into systems calculating operational limits in real time.

Last but not least, this enables the extension of generation dispatch and congestion management into low-voltage distribution grids. Although these grids were traditionally used to flow energy one way – from generation to transmission to end-users – the increasing penetration of distributed resources creates a new need to coordinate the dispatch of these resources locally, and to minimize transportation costs.

CHALLENGES RELATED TO PARTICIPATING DEMAND

Recent events have shown that decentralized energy markets are vulnerable to price volatility. This poses potentially significant economic threats for some nations: large industrial companies may quit deregulated markets because they lack visibility into long-term energy price trends.

One potential solution is to improve market liquidity in the shorter term by providing end-users with incentives to conserve energy when demand exceeds supply. The growing public awareness of energy efficiency is already leading end-users to be much more receptive to using sustainable energy; many utilities are adding economic incentives to further motivate end-users.

These trends are expected to create radical shifts in transmission and distribution (T&D) investment activities. After all, traditional centralized system designs, investments and operations are based on the premise that demand is passive and uncontrollable, and that it makes no active contribution to system operations.

However, the extensive rollout of intelligent metering capabilities has the potential to reverse this, and to enable demand to respond to market signals, so that end-users can interact with system operators in real or near real time. The widening availability of smart metering thus has the potential to bring with it unprecedented levels of demand response that will completely change the way power systems are planned, developed and operated.
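
As a minimal sketch of what such demand response could look like at a single metered site, the code below curtails flexible load whenever the published price crosses a customer-set threshold; the price levels, threshold and load figures are invented for illustration, not drawn from any actual program.

    from dataclasses import dataclass

    @dataclass
    class Interval:
        hour: int
        price_per_kwh: float      # price signal delivered over the smart metering network
        flexible_load_kw: float   # load the customer has flagged as curtailable

    PRICE_THRESHOLD = 0.20  # assumed customer preference: curtail above 20 cents/kWh

    def respond(intervals):
        """Return the curtailment schedule implied by the price signal."""
        schedule = []
        for iv in intervals:
            curtail = iv.price_per_kwh > PRICE_THRESHOLD
            schedule.append((iv.hour, curtail, iv.flexible_load_kw if curtail else 0.0))
        return schedule

    day = [Interval(h, price, 1.5) for h, price in [(17, 0.12), (18, 0.27), (19, 0.31), (20, 0.18)]]
    for hour, curtail, shed_kw in respond(day):
        print(f"{hour}:00  curtail={curtail}  shed={shed_kw} kW")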

CHALLENGES RELATED TO REGULATION

Parallel with these changes to the physical system structure, the market and regulatory frameworks supporting energy systems are likewise evolving. Numerous energy directives have established the foundation for a decentralized electricity supply industry that spans formerly disparate markets. This evolution is changing the structure of the industry from vertically integrated, state-owned monopolies into an environment in which unbundled generation, transmission, distribution and retail organizations interact freely via competitive, accessible marketplaces to procure and sell system services and contracts for energy on an ad hoc basis.

Competition and increased market access seem to be working at the transmission level in markets where there are just a handful of large generators. However, this approach has yet to be proven at the distribution level, where it could involve thousands and potentially millions of participants offering energy and system services in a truly competitive marketplace.

MEETING THE CHALLENGES

As a result, despite all the promise of distributed generation, the current decentralized system will become increasingly unstable without the corresponding development of technical, market and regulatory frameworks over the next three to five years.

System management costs are increasing, and threats to system security are a growing concern as installed distributed generating capacity in some areas exceeds local peak demand. The amount of “regulating energy” that must be provisioned rises as stress on the system increases; meanwhile, governments continue to push for distributed resource penetration and launch new energy efficiency initiatives.

At the same time, most of the large T&D utilities intend to launch new smart grid prototypes that, once stabilized, will be scalable to millions of connection points. The majority of these rollouts are expected to occur between 2010 and 2012.

From a functionality standpoint, the majority of these associated challenges are related to IT system scalability. The process will require applying existing algorithms and processes to generation activities, but in an expanded and more distributed manner.

The following new functions will be required to build a smart grid infrastructure that enables all of this:

New generation dispatch. This will enable utilities to expand their current portfolios of generation dispatching tools to include scheduling of generation assets across transmission and distribution. Utilities could thus better manage the growing number of parameters impacting the dispatch decision, including fuel options, maintenance strategies, the generation unit’s physical capability, weather, network constraints, load models, emissions (modeling, rights, trading) and market dynamics (indices, liquidity, volatility).
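
As a rough, hypothetical illustration of the scheduling problem described above, the sketch below performs a simple merit-order dispatch: units are stacked in order of variable cost until the load forecast is covered, with each unit bounded by its physical capability. Real dispatch tools must also weigh network constraints, emissions and market dynamics, which are omitted here; all unit names, capacities and costs are assumed.

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class Unit:
        name: str
        capacity_mw: float      # physical capability of the generation unit
        variable_cost: float    # $/MWh, standing in for fuel and emissions cost

    def merit_order_dispatch(units: List[Unit], load_mw: float) -> List[Tuple[str, float]]:
        """Dispatch the cheapest units first until the load forecast is covered."""
        schedule = []
        remaining = load_mw
        for u in sorted(units, key=lambda u: u.variable_cost):
            if remaining <= 0:
                break
            output = min(u.capacity_mw, remaining)
            schedule.append((u.name, output))
            remaining -= output
        if remaining > 0:
            raise ValueError(f"Load exceeds available capacity by {remaining} MW")
        return schedule

    fleet = [
        Unit("Nuclear", 1000, 8.0),
        Unit("Wind (forecast)", 300, 0.0),
        Unit("Combined cycle gas", 600, 45.0),
        Unit("Peaking gas turbine", 200, 90.0),
    ]
    print(merit_order_dispatch(fleet, load_mw=1700))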

Renewable and demand-side dispatching systems. By expanding current energy management systems (EMS) capability and architecture, utilities should be able to scale to include millions of active producers and consumers. Resources will be distributed in real time by energy service companies, promoting the most eco-friendly portfolio dispatch methods based on contractual arrangements between the energy service providers and these distributed producers and consumers.

Integrated online asset management systems. New technology tools that help transmission grid operators assess the condition of their overall assets in real time will not only maximize asset usage, but will lead to better leveraging of utilities’ field forces. New standards such as IEC 61850 offer opportunities to manage such models more centrally and more consistently.

Online stability and defense plans. The increasing penetration of renewable generation into grids, combined with deregulation, increases the need for flow control on interconnections between several transmission system operators (TSOs). Additionally, the industry requires improved “situation awareness” tools to be installed in the control centers of utilities operating in larger geographical markets. Although conventional transmission security steady state indicators have improved, utilities still need better early warning applications and adaptable defense plan systems.

MOVING TOWARDS A DISTRIBUTED FUTURE

As concerns about energy supply have increased worldwide, the focus on curbing demand has intensified. Regulatory bodies around the world are thus actively investigating smart meter options. But despite the benefits that smart meters promise, they also raise new challenges on the IT infrastructure side. Before each end-user is able to flexibly interact with the market and the distribution network operator, massive infrastructure re-engineering will be required.

Nonetheless, energy systems throughout the world are already evolving from a centralized to a decentralized model. But to successfully complete this transition, utilities must implement active network management throughout their systems to enable a responsive and manageable alignment of supply and demand. By accomplishing this, energy producers and consumers alike can better match supply and demand, and drive the world toward sustainable energy conservation.