Business Process Improvement

In the past, the utility industry could consider itself exempt from market drivers like those listed above. However, today’s utilities are immersed in a sea of change. Customers demand reliable power in unlimited supply, generated in environmentally friendly ways without increased cost. All the while regulators are telling consumers to “change the way they are using energy or be ready to pay more,” and the Department of Energy is calling for utilities to make significant reductions in usage by 2020 [1].

“The consumer’s concept of quality will no longer be measured by only the physical attributes of the product – it will extend to the process of how the product is made, including product safety, environmental compliance and social responsibility compliance.”

– Victor Fung, chairman of Li & Fung,
in the 2008 IBM CEO Study

As if these issues were not enough, they are coupled with a loss of knowledge and skill due to an aging workforce, an ever-increasing amount of automation and technology being introduced into our infrastructure with few standards, and tightening bond markets and economic declines that require us to do more with less. Now more than ever, the industry needs to redefine its core competencies, identify key customers and their requirements, and define processes that meet or exceed their expectations. Business process improvement is essential to ensure future success for utilities.

There is no need to reinvent the wheel and develop a model for utilities to address business process improvement. One already exists that offers the most holistic approach to process improvement today. It is not new, but like any successful management method, it has been modified and refined to meet continuously changing business needs.

It is agnostic about the methods used for analysis and process improvement, such as Lean, Six Sigma and other tools, and serves as a framework for achieving results in any industry. It is the Baldrige Criteria for Performance Excellence (see Figure 1).

The Criteria for Performance Excellence are designed to help organizations focus on strategy-driven performance while addressing the key decisions that drive both short-term and long-term organizational sustainability in a dynamic environment. Is it possible that this framework was designed for times such as these in the utility industry?

The criteria are essentially simple in design. They are broken into seven categories, as shown in Figure 2: leadership; strategic planning; customer focus; measurement, analysis and knowledge management; workforce focus; process management; and results.

In this model, measurement, analysis and knowledge management establish the foundation. There are two triads. On the left hand side, leadership, strategic planning and customer focus make up the leadership triad. On the right hand side of the model, workforce focus, process management and results make up the results triad. The alignment and integration of these essential elements of business create a framework for continuous improvement. This model should appear familiar in concept to industry leaders; there is not a single utility in the industry that does not identify with these categories in some form.

The criteria are built to elicit a response through the use of “how” and “what” questions that ask about key processes and their deployment throughout the organization. At face value, these questions appear simple. However, as you respond to them, you will realize how they link together and begin to identify opportunities for improvement that are essential to future success. Leaders wishing to begin this effort should not be surprised by the depth of the questions, or by how relatively few members of their organizations can provide complete answers.

To assess the model’s ability to meet utility industry needs, let’s discuss each category in greater detail, relate it to the utility industry and include key questions for you to consider as you begin to assess your own organization’s performance.

Leadership: Who could deny that leadership in utilities is more critical today than at any point in our history? Changes in energy markets are bringing with them increased levels of accountability, a greater focus on regulatory, legal and ethical requirements, a need for long-term viability and sustainability, and increased expectations of community support. Today’s leaders are expected to achieve ever-increasing levels of operational performance while operating on thinner margins than ever before.

“The leadership category examines how senior leaders’ personal actions guide and sustain the organization. Also examined are the organization’s governance system and how it fulfills legal, ethical and societal responsibilities as well as how it selects and supports key communities [2].”

Strategic Planning: Does your utility have a strategic plan? Not a dust-laden document sitting on a bookshelf or a financial budget, but a plan that identifies strategic objectives and action plans to address short- and long-term goals. Our current business environment demands that we identify our core competencies (and, more importantly, what our core competencies are not), identify strategic challenges to organizational success, recognize strategic advantages and develop plans that keep our efforts focused on objectives that will ensure achievement of our mission and vision.

What elements of our business should we outsource? Do our objectives utilize our competitive advantages and core competencies to diminish organizational challenges? We all know the challenges that are here today and that await us just beyond the horizon. Many of them are common to all utilities: an aging workforce, decreased access to capital, technological change and regulatory change. How are we addressing them today? Is our approach systematic and proactive, or are we simply reacting to the challenges as they arise?

“The strategic planning category examines how your organization develops strategic objectives and action plans. Also examined are how your chosen strategic objectives and action plans are deployed and changed if circumstances require, and how progress is measured [2].”

Customer Focus: The success of the utility industry has been due in part to a long-term positive relationship with its customers. Most utilities have made a conscientious effort to identify and address the needs of the customer; however, a new breed of customer is emerging with greater expectations, a higher degree of sensitivity to environmental issues, a diminished sense of loyalty to business organizations and an overall suspicion about ethical and legal compliance.

Their preferred means of communication are quite different from those of the generations of loyal customers you have enjoyed in the past. They judge your performance against similar customer experiences received from organizations far beyond the traditional competitor.

You now compete against Wal-Mart’s supply chain process, Amazon.com’s payment processes and their favorite hotel chain’s loyalty rewards process. You are being weighed in the balances and in many cases found to be lacking. Worse yet, you may not have even recognized them as an emerging customer segment.

“The Customer Focus category examines how your organization engages its customers for long-term marketplace success and builds a customer-focused culture. Also examined is how your organization listens to the voice of its customers and uses this information to improve and identify opportunities for innovation [2].”

Measurement, Analysis, and Knowledge Management: GIS, CIS, AMI, SCADA and other systems create and maintain a wealth of data that can be analyzed to produce the knowledge needed to make rapid business decisions. However, many of these systems are difficult, if not impossible, to integrate with one another, leaving leaders with a lot of data but no meaningful measures of key performance. Even worse, a lack of standards for measuring system performance leaves the utilities that do develop performance measures with only a limited number of consistently measured comparatives from their peers.

If utilities are going to overcome the challenges of the future, it is essential that they integrate all data systems for improved accessibility and develop standards that would facilitate meaningful comparative measures. This is not to say that comparative measures do not exist; they do. However, increasing the number of utilities participating would increase our understanding of best practices and enable us to determine best-in-class performance.
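
To make the point about comparative measures concrete, here is a minimal sketch of computing IEEE 1366-style reliability indices (SAIFI and SAIDI) from an outage-record export so that results are comparable across utilities. The code, the CSV layout and the column names are illustrative assumptions, not any particular vendor’s format.

    # Minimal sketch: deriving consistently defined reliability comparatives
    # (IEEE 1366-style SAIFI/SAIDI) from an outage-record export. The CSV
    # layout and column names are hypothetical.
    import csv

    def reliability_indices(outage_csv_path, customers_served):
        """Return (SAIFI, SAIDI) for one utility from a flat outage export."""
        customers_interrupted = 0
        customer_minutes = 0.0
        with open(outage_csv_path, newline="") as f:
            for row in csv.DictReader(f):
                affected = int(row["customers_affected"])    # hypothetical column
                duration = float(row["duration_minutes"])    # hypothetical column
                customers_interrupted += affected
                customer_minutes += affected * duration
        saifi = customers_interrupted / customers_served
        saidi = customer_minutes / customers_served
        return saifi, saidi

    # Comparing utilities is only meaningful if each computes the indices from
    # the same definitions and data standards, which is the argument above.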

“The measurement, analysis and knowledge management category examines how the organization selects, gathers, analyzes, manages and improves its data, information and knowledge assets and how it manages its information technology. The category also examines how your organization reviews and uses reviews to improve its performance [2].”

Workforce Focus: We have already addressed the aging workforce and its impact on the future of utilities. Companion workforce challenges include the heavy benefits burdens that many utilities currently bear, diminished interest in labor positions, and the need for new training methods that engage the variety of generations within our workforce and ensure knowledge acquisition and retention.

The new workforce brings with it new requirements for satisfaction and engagement. The new employee has proven to be less loyal to the organization, and studies show they will have many more employers before they retire than their predecessors did. It is essential that we develop ways to identify these requirements and take action to retain these individuals, or we risk increased training costs and operational issues as they seek new employment opportunities.

“The workforce focus category examines how your organization engages, manages and develops the workforce to utilize its full potential in alignment with organizational mission, strategy and action plans. The category examines the ability to assess workforce capability and capacity needs and to build a workforce environment conducive to high performance [2].”

Process Management: It is not unusual for utilities to implement new software with dramatically increased capabilities and then ask the integrator to make it align with their current processes, or to continue using their current processes without regard for the system’s new capabilities. Identifying and mapping key work processes can uncover significant opportunities for streamlining your organization and facilitate increased utilization of technology.

What are your utility’s key work processes, how do you determine them, and how do they relate to creating customer value? These questions are difficult for leaders to answer; yet without a clear understanding of key work processes and their alignment with core competencies, strategic advantages and challenges, your organization may be misapplying its efforts, either outsourcing something best maintained internally or performing work that is better delivered by outsource providers.

“The process management category examines how your organization designs its work systems and how it designs, manages and improves its key processes for implementing these work systems to deliver customer value and achieve organizational success and sustainability. Also examined is your readiness for emergencies [2].”

Results: Results are the fruit of your efforts, the return the Baldrige Criteria enables you to realize from the work you put in. All of us want positive results. Many utilities cite positive performance in measures that are easy to acquire: financial performance, safety performance, customer satisfaction. But which of these measures are key to our success and sustainability as an organization? As you answer the questions and align measures that are integral to achieving your organization’s mission and vision, it will become abundantly clear which measures you need to maintain, and where you need to develop competitive comparisons and benchmarks.

“The results category examines the organization’s performance and improvement in all key areas – product outcomes, customer-focused outcomes, financial and market outcomes, workforce-focused outcomes, process-effectiveness outcomes and leadership outcomes. Performance levels are examined relative to those of competitors and other organizations with similar product offerings [2].”

A Challenge

The adoption of the Baldrige criteria is often described as a journey. Few utilities have embraced this model. However, it appears to offer a comprehensive solution to the challenges we face today. Utilities have a rich history and play a positive role in our nation. A period of rapid change is upon us. We need to shift from reacting to leading as we solve the problems that face our industry. By applying this model for effective process improvement, we can once again create a world where utilities lead the future.

References

  1. Quote from U.S. Treasury Secretary Tim Geithner as communicated in SmartGrid Newsletter
  2. Malcolm Baldrige National Quality Award, “Path to Excellence and Some path Building Tools.” www.nist.gov/baldrige.

Future of Learning

The nuclear power industry is facing significant employee turnover, which may be exacerbated by the need to staff new nuclear units. To maintain a highly skilled workforce to safely operate U.S. nuclear plants, the industry must find ways to expedite training and qualification, enhance knowledge transfer to the next generation of workers, and develop leadership talent to achieve excellent organizational effectiveness.

Faced with these challenges, the Institute of Nuclear Power Operations (INPO), the organization charged with promoting safety and reliability across the 65 nuclear electric generation plants operating in the U.S., created a “Future of Learning” initiative. It identified ways the industry can maintain the same high standard of excellence and record of nuclear safety while accelerating training development, the building of individual competencies and plant training operations.

The nuclear power industry is facing the perfect storm. First, like much of the industrialized world, it must address the issues associated with an aging workforce, since many of its skilled workers and nuclear engineering professionals are hitting retirement age, moving out of the industry and beginning other pursuits.

Second, as baby boomers transition out of the workforce, they will be replaced by an influx of Generation Y workers. Many workers in this “millennials” generation are not aware of the heritage driving the single-minded focus on safety. They are asking for new learning models that utilize the technologies which are so much a part of their lives.

Third, even as this big crew change takes place, there is increasing demand for electricity. Many are turning to cleaner technologies – solar, wind, and nuclear – to close the gap. And there is a resurgence in requests to build new nuclear plants, or to add new reactors at existing plants. This nuclear renaissance also requires a workforce trained and prepared to take on the task of safely and reliably operating our nuclear power plants.

It is estimated there will be an influx of 25,000 new workers in the industry over the next five years, with an additional 7,000 new workers needed if just a third of the new plants are built. Given that incoming workers are more comfortable using technology for learning, and that delivery models that include a blend of classroom-based, instructor-led, and Web-based methods can be more effective and efficient, the industry is exploring new models and a new mix of training.

INPO was created by the nuclear industry in 1979 following the Three Mile Island accident. It has 350 full-time and loaned employees. As a nonprofit organization, it is chartered to promote the highest levels of safety and reliability – in essence, to promote excellence – in the operation of nuclear electric generating plants. All U.S. nuclear operating companies are members.

INPO’s responsibilities include evaluating member nuclear site operations, accrediting each site’s nuclear training programs and providing assistance and information exchange. It has established the National Academy for Nuclear Training, and an independent National Nuclear Accrediting Board. INPO sends teams to sites to evaluate their respective training activities, and each station is reviewed at least every four years by the accrediting board.

INPO has developed guidelines for 12 specifically accredited programs (six operations and six maintenance/technical), including accreditation objectives and criteria. It also offers courses and seminars on leadership, where more than 1,500 individuals participate annually, from supervisors to board members. Lastly, it operates NANTeL (National Academy for Nuclear Training e-Learning system) with 200 courses for general employee training for nuclear access. More than 80,000 nuclear workers and sub-contractors have completed training over the Web.

The Future of Learning

In 2008, to systematically address workforce and training challenges, the INPO Future of Learning team partnered with IBM Workforce and Learning Solutions to conduct more than 65 one-on-one interviews with chief executive officers, chief nuclear officers, senior vice presidents, plant managers, plant training managers and other leaders in the extended industry community. The team also completed 46 interviews with plant staff during a series of visits to three nuclear power plants. Lastly, the team developed and distributed a survey sent to training managers at the 65 nuclear plants, achieving a 62 percent response rate.

These are statements the team heard:

  • “Need to standardize a lot of the training, deliver it remotely, preferably to a desktop, minimize the ‘You train in our classroom in our timeframe’ and have it delivered more autonomously so it’s likely more compatible with their lifestyles.”
  • “We’re extremely inefficient today in how we design/develop and administer training. We don’t want to carry inefficiencies that we have today into the future.”
  • “Right now, in all training programs, it’s a one-size-fits-all model that’s not customized to an individual’s background. Distance learning would enable this by allowing people to demonstrate knowledge and let some people move at a faster pace.”
  • “We need to have ‘real’ e-learning. We’ve been exposed to less than adequate, older models of e-learning. We need to move away from ‘page turners’ and onto quality content.”

Several recommendations were generated as a result of the study. The first focused on ways to improve INPO’s current training offerings by adding leadership development courses, ratcheting up the interactivity of the Web-based and e-learning offerings in NANTeL and developing a “nuclear citizenship” course for new workers in the industry.

Second, there were recommendations about better utilizing training resources across the industry by centralizing common training, beginning with instructor training and certification and generic fundamentals courses. It was estimated that 50 percent of the accredited training materials are common across the industry. To accomplish this objective, INPO is exploring an industry infrastructure that would enable centralized training material development, maintenance and delivery.

The last set of recommendations focused on methods for better coordination and efficiency of training, including developing processes for certifying vendor training programs, and providing a jump-start to common community college and university curriculum.

In 2009, INPO is piloting a series of Future of Learning initiatives which will help determine the feasibility, cost-effectiveness, readiness and acceptance of this first set of recommendations. It is starting to look more broadly at ways it can utilize learning technology to drive economies of scale, accelerative and prescriptive learning, and deliver value to the nuclear electric generation industry.

Where Do We Go From Here?

Beyond the initial perfect storm is another set of factors driving the future of learning.

First, consider the need for speed. It has been said that “If you are not learning at the speed of change, you are falling behind.”

In his “25 Lessons from Jack Welch,” the former CEO of General Electric said, “The desire, and the ability, of an organization to continuously learn from any source, anywhere – and to rapidly convert this learning into action – is its ultimate competitive advantage.” Giving individuals, teams and organizations the tools and technologies to accelerate and broaden their learning is an important part of the future of learning.

Second, consider the information explosion – the sheer volume of information available, the convenience of information access (due, in large part, to continuing developments in technology) and the diversity of information available. When there is too much information to digest, a person is unable to locate and make use of the information he or she needs; when the sheer volume of information cannot be processed, overload occurs. The future of learning should enable the learner to sort through information and find knowledge.

Third, consider new developments in technology. Generations X and Y are considered “digital natives.” They expect the most current technologies to be available to them – including social networking, blogging, wikis, immersive learning and gaming – and not having them is unthinkable.

Impact of New Technology

The philosophy of training has morphed from “just-in-case” (teach them everything and hope they will remember when they need it), to “just-in-time” (provide access to training just before the point of need), to “just-for-me.” With respect to the latter, learning is presented in a preferred medium, with a learning path customized to reflect the student’s preferred learning style, and personalized to address the current and desired level of expertise within any given time constraint.

Imagine a scenario in which a maintenance technician at a nuclear plant has to replace a specialized valve – something she either hasn’t done for a while, or hasn’t replaced before. In a Web 2.0 world, she should be able to run a query on her iPhone or similar handheld device and pull up the maintenance procedure for that particular valve, access the maintenance records, view a video of the approved replacement procedure, or reach an expert who could coach her through the process.

Learning Devices

What needs to be in place to enable this vision of the future of learning? First, workers will need a device that can access the information by connecting over a secure wireless network inside the plant. Second, the learning has to be available in small chunks – learning nuggets or learning assets. Third, the learning needs to be assembled along the dimensions of learning style, desired and target level of expertise, time available and media type, among other factors. Finally, experts need to be identified, tagged to particular tasks and activities, and made accessible.

Fortunately, some of the same learning technology tools that will enable centralized maintenance and accelerated development will also facilitate personalized learning. When training is organized at a more granular level – the learning asset level – not only can it be leveraged over a variety of courses and courseware, it can also be re-assembled and ported to a variety of outputs such as lesson books, e-learning and m-learning (mobile-learning).
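
As a rough illustration of how asset-level granularity enables “just-for-me” assembly, the sketch below tags each learning asset with metadata and builds a personalized path against a learner profile. The data structure, field names and ordering rule are assumptions for illustration, not the design of NANTeL or any other specific system.

    # Minimal sketch of "just-for-me" assembly: granular learning assets tagged
    # with metadata, then filtered and ordered against a learner profile.
    from dataclasses import dataclass

    @dataclass
    class LearningAsset:
        asset_id: str
        topic: str
        media: str        # e.g. "video", "text", "simulation"
        level: int        # 1 = awareness ... 5 = expert
        minutes: int

    def assemble_path(assets, topic, preferred_media, current_level,
                      target_level, time_budget):
        """Pick assets for one topic, honoring media preference and time budget."""
        candidates = [a for a in assets
                      if a.topic == topic and current_level < a.level <= target_level]
        # Order by level, then by preferred media type, then by duration.
        candidates.sort(key=lambda a: (a.level, a.media != preferred_media, a.minutes))
        path, used = [], 0
        for a in candidates:
            if used + a.minutes <= time_budget:
                path.append(a)
                used += a.minutes
        return path

    catalog = [
        LearningAsset("V-101", "valve-replacement", "video", 2, 12),
        LearningAsset("T-201", "valve-replacement", "text", 3, 20),
        LearningAsset("S-301", "valve-replacement", "simulation", 4, 45),
    ]
    print(assemble_path(catalog, "valve-replacement", "video", 1, 4, 60))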

The example above pointed out another shift in our thinking about learning. Traditionally, our paradigm has been that learning occurs in a classroom, and when it occurs, it has taken the form of a course. In the example above, the learning takes place anywhere and anytime, moving from the formal classroom environment to an informal environment. Of course, just because learning is “informal” does not mean it is accidental, or that it occurs without preparation.

Some estimates claim 10 percent of our learning is achieved through formal channels, 20 percent from coaching, and 70 percent through informal means. Peter Henschel, former director of the Institute for Research on Learning, raised an important question: If nearly three-quarters of learning in corporations is informal, can we afford to leave it to chance?

There are still several open issues regarding informal learning:

  • How do we evaluate the impact/effectiveness of informal learning? (Informal learning, but formal demonstration of competency/proficiency);
  • How do we record one’s participation and skill-level progression in informal learning? (Informal learning, but formal recording of learning completion);
  • Who will create and maintain informal learning assets? (Informal learning, but formal maintenance and quality assurance of the learning content); and
  • When does informal learning need a formal owner (in a full- or part-time role)? (Informal learning, but formal policies to help drive and manage).

In the nuclear industry, accurate and up-to-date documentation is a necessity. As the industry moves toward more effective use of informal channels of learning, it will need to address these issues.

Immersive Learning (or Virtual Worlds)

The final frontier for the future of learning is expansion into virtual worlds, also known as immersive learning. Although Second Life (SL) is the best-known virtual world, there are also emerging competitors, including Active Worlds, Forterra (OLIVE), Qwaq and Unisfair.

Created in 2003 by Linden Lab of San Francisco, SL is a three-dimensional, virtual world that allows users to buy “property,” create objects and buildings and interact with other users. Unlike a game with rules and goals, SL offers an open-ended platform where users can shape their own environment. In this world, avatars do many of the same things real people do: work, shop, go to school, socialize with friends and attend rock concerts.

From a pragmatic perspective, working in an immersive learning environment such as a virtual world provides several benefits that make it an effective alternative to real life:

  • Movement in 3-D Space. A virtual world could be useful in any learning situation involving movement, danger, tactics, or quick physical decisions, such as emergency response.
  • Engendering Empathy. Participants experience scenarios from another person’s perspective. For example, the Future of Learning team is exploring ways to re-create the control room experience during the Three Mile Island incident, to provide a cathartic experience for the next-generation workforce so they can better appreciate the importance of safety and human performance factors.
  • Rapid Prototyping and Co-Design. A virtual world is an inexpensive environment for quickly mocking up prototypes of tools or equipment.
  • Role Playing. By conducting role plays in realistic settings, instructors and learners can take on various avatars and play those characters.
  • Alternate Means of Online Interaction. Although users would likely not choose a virtual world as their primary online communication tool, it provides an alternative means of indicating presence and allowing interaction. Users can have conversations, share note cards, and give presentations. In some cases, SL might be ideal as a remote classroom or meeting place to engage across geographies and utility boundaries.

Robert Amme, a physicist at the University of Denver, has another laboratory in SL. Funded by a grant from the Nuclear Regulatory Commission, his team is building a virtual nuclear reactor to help train the next generation of environmental engineers on how to deal with nuclear waste (see Figure 1). The INPO Future of Learning team is exploring ways to leverage this type of learning asset as part of the nuclear citizenship initiative.

There is no doubt that nuclear power generation is once again on an upswing, but critical to its revival and longevity will be the manner in which we prepare the current and next generation of workers to become outstanding stewards of a safe, effective, clean-energy future.

Modeling Distribution Demand Reduction

In the past, distribution demand reduction was a technique used only in emergency situations a few times a year – if that. It was an all-or-nothing capability that you turned on, and hoped for the best until the emergency was over. Few utilities could measure the effectiveness, let alone the potential of any solutions that were devised.

Now, demand reduction is evolving to better support the distribution network during typical peaking events, rather than just emergencies. However, in this mode, it is important not only to understand the solution’s effectiveness, but to be able to treat it like any other dispatchable load-shaping resource. Advanced modeling techniques and capabilities are allowing utilities to do just that. This paper outlines various methods and tools that allow utilities to model distribution demand reduction capabilities within set time periods, or even in near real time.

Electricity demand continues to outpace the ability to build the new generation and supporting infrastructure needed to meet the ever-growing increases dictated by population growth and smart residences across the globe. In most parts of the world, electrical energy is one of the most important underpinnings of modern civilization. It helps produce our food, keeps us comfortable, and provides lighting, security, information and entertainment. In short, it is a part of almost every facet of life, and without electrical energy, the modern interconnected world as we know it would cease to exist.

Every country has one or more initiatives underway, or in planning, to deal with some aspect of generation and storage, delivery or consumption issues. Additionally, greenhouse gases (GHG) and carbon emissions need to be tightly controlled and monitored. This must be carefully balanced with expectations from financial markets that utilities deliver balanced and secure investment portfolios by demonstrating fiduciary responsibility to sustain revenue projections and measured growth.

The architects of today’s electric grid probably never envisioned the day when electric utility organizations would purposefully take measures to reduce the load on the network, deal with highly variable localized generation and reverse power flows, or anticipate a regulatory climate that impacts the decisions for these measures. They designed the electric transmission and distribution systems to be robust, flexible and resilient.

When first conceived, the electric grid was far from stable and resilient. It took growth, prudence and planning to continue the expansion of the electric distribution system. This grid was made up of a limited number of real power and reactive power devices that responded to occasional changes in power flow and demand. However, it was also designed in a world with far fewer people, with a virtually unlimited source of power, and without much concern or knowledge of the environmental effects that energy production and consumption entail.

To effectively mitigate these complex issues, a new type of electric utility business model must be considered. It must rapidly adapt to ever-changing demands in terms of generation, consumption, and environmental and societal benefits. A grid made up of many intelligent and active devices that can manage consumption from both the consumer and utility side of the meter must be developed. This new business model will utilize demand management as a key element of the operation of the utility, while at the same time influencing consumer spending behavior.

To that end, a holistic model is needed that understands all aspects of the energy value chain across generation, delivery and consumption, and can optimize the solution in real time. While a unifying model may still be a number of years away, a lot can be gained today from modeling and visualizing the distribution network to gauge the role that demand reduction can – and does – play in near real time. The solutions that follow are well worth considering to that end.

Advanced Feeder Modeling

First, a utility needs to understand in more detail how its distribution network behaves. When distribution networks were conceived, they were designed primarily with sources (the head of the feeder and substation) and sinks (the consumers or load) spread out along the distribution network. Power flows were assumed to be one direction only, and the feeders were modeled for the largest peak level.

Voltage and volt-ampere reactive power (VAR) management were generally considered for loss optimization and not load reduction. No thought was given to limiting power to segments of the network, or to distributed storage or generation, all of which could dramatically affect the flow of the network, even causing reverse flows at times. Sensors to measure voltage and current were applied at the head of the feeder and at a few critical points (mostly in historical problem areas).

Feeder planning at most utilities is an exercise performed when large changes are anticipated (e.g., a new subdivision or major customer) or on a periodic basis, usually every three to five years. Loads were traditionally well understood with predictable variability, so this type of approach worked reasonably well. The utility also was in control of all generation sources on the network (e.g., peakers), and when there was a need for demand reduction, it was controlled by the utility, usually only during critical periods.

Today’s feeders are much more complex, and are being significantly influenced by both generation and demand from entities outside the control of the utility. Even within the utility, various seemingly disparate groups will, at times, attempt to alter power flows along the network. The simple model of worst-case peaking on a feeder is not sufficient to understand the modern distribution network.

The following factors must be considered in the planning model:

  • Various demand-reduction techniques, when and where they are applied and the potential load they may affect;
  • Use of voltage reduction as a load-shedding technique, and where it will most likely yield significant results (i.e., resistive load);
  • Location, size and capacity of storage;
  • Location, size and type of renewable generation systems;
  • Use and location of plug-in electrical vehicles;
  • Standby generation that can be fed into the network;
  • Various social ecosystems and their characteristics to influence load; and
  • Location and types of sensors available.

Generally, feeders are modeled as a single unit with their power characteristic derived from the maximum peaking load and connected kilovolt-amperage (kVA) of downstream transformers. A more advanced model treats the feeder as a series of connected segments. The segment definitions can be arbitrary, but are generally chosen where the utility will want to understand and potentially control these segments differently than others. This may be influenced by voltage regulation, load curtailment, stability issues, distributed generation sources, storage, or other unique characteristics that differ from one segment to the next.
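
A minimal sketch of what such a segmented model might look like in code follows. Each segment carries the local attributes called out in the planning factors above (connected kVA, distributed generation, storage, EV charging, sensor coverage) rather than treating the feeder as a single peak-load lump. The field names and the naive aggregation are illustrative assumptions, not a specific planning tool’s schema.

    # Segmented feeder model: per-segment attributes instead of one peak-load lump.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class FeederSegment:
        segment_id: str
        connected_kva: float
        peak_load_kw: float
        dg_capacity_kw: float = 0.0     # distributed generation on this segment
        storage_kwh: float = 0.0
        ev_chargers: int = 0
        has_voltage_sensor: bool = False

    @dataclass
    class Feeder:
        feeder_id: str
        segments: List[FeederSegment] = field(default_factory=list)

        def net_peak_kw(self):
            # Naive aggregate: local generation offsets local load. A real
            # power-flow study would also account for losses and voltage limits.
            return sum(max(s.peak_load_kw - s.dg_capacity_kw, 0.0) for s in self.segments)

        def unmonitored_segments(self):
            return [s.segment_id for s in self.segments if not s.has_voltage_sensor]

    f = Feeder("FDR-12", [
        FeederSegment("A", connected_kva=1500, peak_load_kw=900, has_voltage_sensor=True),
        FeederSegment("B", connected_kva=800, peak_load_kw=520, dg_capacity_kw=150, ev_chargers=12),
    ])
    print(f.net_peak_kw(), f.unmonitored_segments())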

The following is an advanced approach to modeling electrical distribution feeder networks. It provides for segmentation and sensor placement in the absence of a complete network and historical usage model. The modeling combines traditional electrical engineering and power-flow modeling with tools such as CYME, and non-traditional approaches using geospatial and statistical analysis.

The model builds upon information such as usage data, network diagrams, device characteristics and existing sensors. It then adds elements that could present a discrepancy with the known model, such as social behavior, demand-side programs, and future grid operations, based on both spatio-temporal and statistical modeling. Finally, suggestions can be made about sensor placement and characteristics to support system monitoring once the network is in place.

Generally, a utility would take a more simplistic view of the problem. It would start by directly applying statistical analysis and stochastic modeling across the grid to develop a generic methodology for selecting the number of sensors and where to place them, based on sensor accuracy, cost and the risk of error introduced by basic modeling assumptions (load allocation, timing of peak demand, and other influences on error). However, doing so would limit the utility to dealing only with the data it has, in an environment that will be changing dramatically.

The recommended and preferred approach performs some analysis to determine what the potential error sources are, which sources are material to the sensor question, and which could influence the system’s power flows. Next, an attempt can be made to geographically characterize where on the grid these influences are most significant. Then, a statistical approach can be applied to develop a model for setting the number, type and location of additional sensors. Lastly, sensor density and placement can be addressed.
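
The statistical step can be pictured with a small Monte Carlo sketch: propagate an assumed load-allocation error through each segment’s peak estimate and flag segments whose uncertainty exceeds a tolerance as candidates for additional sensors. The per-grade error levels and the kW tolerance are illustrative assumptions tied to the geospatial “grades” described below, not derived values.

    # Monte Carlo sketch: flag feeder segments whose load-estimate uncertainty
    # (driven by an assumed per-grade allocation error) justifies a sensor.
    import random

    GRADE_ERROR = {"stable": 0.05, "changing": 0.20}   # relative allocation error by grade

    def segments_needing_sensors(segments, tolerance_kw=75.0, trials=5000, seed=42):
        """segments: dict of segment_id -> (peak_kw, grade)."""
        rng = random.Random(seed)
        flagged = []
        for seg_id, (peak_kw, grade) in segments.items():
            sigma = GRADE_ERROR[grade]
            draws = [peak_kw * (1 + rng.gauss(0.0, sigma)) for _ in range(trials)]
            mean = sum(draws) / trials
            std_kw = (sum((d - mean) ** 2 for d in draws) / trials) ** 0.5
            if std_kw > tolerance_kw:
                flagged.append(seg_id)
        return flagged

    print(segments_needing_sensors({
        "A": (900.0, "stable"),     # ~45 kW of uncertainty: below tolerance
        "B": (520.0, "changing"),   # ~104 kW of uncertainty: flagged
        "C": (1480.0, "stable"),    # ~74 kW of uncertainty: borderline
    }))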

Feeder Modeling Technique

Feeder conditioning is important to minimize losses, especially when the utility wants to moderate voltage levels as a load modification method. Without proper feeder conditioning and sufficient sensors to monitor the network, the utility risks either violating regulatory voltage levels or limiting its ability to shed the optimal amount of load from the system during voltage reduction operations.
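
The voltage-reduction trade-off can be made concrete with a small calculation: the achievable load relief is roughly the CVR (conservation voltage reduction) factor times the percentage voltage reduction, bounded by the lowest service voltage that may be delivered (114 V on a 120 V base under ANSI C84.1 Range A). The 0.8 CVR factor and the example numbers are illustrative assumptions for a resistive-heavy feeder.

    # Sketch of the voltage-reduction trade-off: load relief vs. voltage headroom.
    def cvr_reduction_kw(feeder_load_kw, end_of_line_volts, target_volts=114.0,
                         base_volts=120.0, cvr_factor=0.8):
        headroom_pct = max(end_of_line_volts - target_volts, 0.0) / base_volts * 100.0
        load_reduction_pct = cvr_factor * headroom_pct
        return feeder_load_kw * load_reduction_pct / 100.0, headroom_pct

    kw, dv = cvr_reduction_kw(feeder_load_kw=5000.0, end_of_line_volts=118.0)
    print(f"{dv:.1f}% voltage headroom -> about {kw:.0f} kW of load relief")
    # Without sensors at the weakest points of the feeder, end_of_line_volts is
    # not known with confidence, which is exactly the risk noted above.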

Traditionally, feeder modeling is a planning activity that is done at periodic (for example, yearly) intervals or during an expected change in usage. Tools such as CYME’s CYMDIST provide feeder analysis using:

  • Balanced and unbalanced voltage drop analysis (radial, looped or meshed);
  • Optimal capacitor placement and sizing to minimize losses and/or improve voltage profile;
  • Load balancing to minimize losses;
  • Load allocation/estimation using customer consumption data (kWh), distribution transformer size (connected kVA), real consumption (kVA or kW) or the REA method, with the algorithm treating multiple metering units as fixed demands and large metered customers as fixed loads;
  • Flexible load models for uniformly distributed loads and spot loads featuring independent load mix for each section of circuit;
  • Load growth studies for multiple years; and
  • Distributed generation.

However, in many cases, much of the information required to run an accurate model is not available. This may be because the data does not exist, the feeder usage paradigm is changing, the sampling period does not represent true usage of the network, the network usage is about to undergo significant changes, or because of other, non-electrical factors.

This represents a bit of a chicken-or-egg problem. A utility needs to condition its feeders to change the operational paradigm, but it also needs operational information to make decisions on where and how to change the network. The solution is a combination of using existing known usage and network data, and combining it with other forms of modeling and approximation to build the best future network model possible.

Therefore, this exercise refines traditional modeling with three additional techniques: geospatial analysis; statistical modeling; and sensor selection and placement for accuracy.

If a distribution management system (DMS) will be deployed, or is being considered, its modeling capability may be used as an additional basis and refinement employing simulated and derived data from the above techniques. Lastly, if high accuracy is required and time allows, a limited number of feeder segments can be deployed and monitored to validate the various modeling theories prior to full deployment.

The overall goals for using this type of technique are:

  • Limit customer over- or under-voltage;
  • Maximize returned megawatts in the system in load reduction modes;
  • Optimize the effectiveness of the DMS and its models;
  • Minimize cost of additional sensors to only areas that will return the most value;
  • Develop automated operational scenarios, test and validation prior to system-wide implementation; and
  • Provide a foundation for additional network automation capabilities.

The first step is to set aside a short period of time to thoroughly vet possible influences on the number, spacing and value offered by additional sensors on the distribution grid. This involves understanding and obtaining the information that will most influence the model, and therefore the use of sensors. Information could include historical load data, distribution network characteristics, transformer nameplate loading, customer survey data, weather data and other related information.

The second step is the application of geospatial analysis to identify areas of the grid most likely to have influences driving a need for additional sensors. It is important to recognize that within this step is a need to correlate those influential geospatial parameters with load profiles of various residential and commercial customer types. This step represents an improvement over simply applying the same statistical analysis generically over the entirety of the grid, allowing for two or more “grades” of feeder segment characteristics for which different sensor standards would be developed.

The third step is the statistical analysis and stochastic modeling to develop recommended standards and methodology for determining sensor placement based on the characteristic segments developed from the geospatial assessment. Items set aside as not material for sensor placement serve as a necessary input to the coming “predictive model” exercise.

Lastly, a traditional electrical and accuracy-based analysis is used to model the exact number and placement of additional sensors to support the derived models and planned usage of the system for all scenarios depicted in the model – not just summertime peaking.

Conclusion

The modern distribution network built for the smart grid will need to undergo significantly more detailed planning and modeling than a traditional network. No one tool is suited to the task, and it will take multiple disciplines and techniques to derive the most benefit from the modeling exercise. However, if a utility embraces the techniques described within this paper, it will not only have a better understanding of how its networks perform in various smart grid scenarios, but it will also be better positioned to fully optimize its networks for load management and loss reduction.

Silver Spring Networks

When engineers built the national electric grid, their achievement made every other innovation built on or run by electricity possible – from the car and airplane to the radio, television, computer and the Internet. Over decades, all of these inventions have gotten better, smarter and cheaper while the grid has remained exactly the same. As a result, our electrical grid is operating under tremendous stress. The Department of Energy estimates that by 2030, demand for power will outpace supply by 30 percent. And this increasing demand for low-cost, reliable power must be met alongside growing environmental concerns.

Silver Spring Networks (SSN) provides the first proven technology to enable the smart grid. SSN is a complete smart grid solutions company that enables utilities to achieve operational efficiencies, reduce carbon emissions and offer their customers new ways to monitor and manage their energy consumption. SSN provides hardware, software and services that allow utilities to deploy and run unlimited advanced applications, including smart metering, demand response, distribution automation and distributed generation, over a single, unified network.

The smart grid should operate like the Internet for energy, without proprietary networks built around a single application or device. In the same way that one can plug any laptop or device into the Internet, regardless of its manufacturer, utilities should be able to “plug in” any application or consumer device to the smart grid. SSN’s Smart Energy Network is based on open, Internet Protocol (IP) standards, allowing for continuous, two-way communication between the utility and every device on the grid – now and in the future.

The IP networking standard adopted by Federal agencies has proven secure and reliable over decades of use in the information technology and finance industries. This network provides a high-bandwidth, low-latency and cost-effective solution for utility companies.

SSN’s network interface cards (NICs) are installed in “smart” devices, like smart meters at the consumer’s home, allowing them to communicate with SSN’s access points. Each access point communicates with networked devices over a radius of one or two miles, creating a wireless communication mesh that connects every device on the grid to one another and to the utility’s back office.
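
As a back-of-envelope illustration of that mesh architecture, the sketch below estimates how many access points a service territory might need if each covers roughly a one- to two-mile radius. It is purely illustrative; real mesh planning depends on terrain, meter density and RF propagation, none of which this captures, and the 5,000 square-mile territory is an arbitrary example.

    # Rough coverage estimate for mesh access points; illustrative only.
    import math

    def access_points_needed(territory_sq_miles, coverage_radius_miles):
        cell_area = math.pi * coverage_radius_miles ** 2
        return math.ceil(territory_sq_miles / cell_area)

    for radius in (1.0, 2.0):
        print(f"{radius:.0f}-mile radius -> about "
              f"{access_points_needed(5000, radius)} access points for 5,000 sq mi")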

Using the Smart Energy Network, utilities will be able to remotely connect or disconnect service, send pricing information to customers who can understand how much their energy is costing in real time, and manage the integration of intermittent renewable energy sources like solar panels, plug-in electric vehicles and wind farms.

In addition to providing The Smart Energy Network and the software/firmware that makes it run smoothly, SSN develops applications like outage detection and restoration, and provides support services to its utility customers. By minimizing or eliminating interruptions, the self-healing grid could save industrial and residential consumers over $100 billion per year.

Founded in 2002 and headquartered in Redwood City, Calif., SSN is a privately held company backed by Foundation Capital, Kleiner Perkins Caufield & Byers and Northgate Capital. The company has over 200 employees and a global reach, with partnerships in Australia, the U.K. and Brazil.

SSN is the leading smart grid solutions provider, with successful deployments with utilities serving 20 percent of the U.S. population, including Florida Power & Light (FPL), Pacific Gas & Electric (PG&E), Oklahoma Gas & Electric (OG&E) and Pepco Holdings, Inc. (PHI), among others.

FPL is one of the largest electric utilities in the U.S., serving approximately 4.5 million customers across Florida. In 2007, SSN and FPL partnered to deploy SSN’s Smart Energy Network to 100,000 FPL customers. The project began with rigorous environmental and reliability testing to ensure that SSN’s technology would hold up under the harsh environmental conditions in some areas of Florida. Few companies are able to sustain the scale and quality of testing that FPL required during this deployment, including power outage notification testing, exposure to water and salt spray, and network throughput performance testing for self-healing failover characteristics.

SSN’s solution has met or exceeded all FPL acceptance criteria. FPL plans to continue deployment of SSN’s Smart Energy Network at a rate of one million networked meters per year beginning in 2010 to all 4.5 million residential customers.

PG&E is currently rolling out SSN’s Smart Energy Network to all 5 million electric customers over a 70,000 square-mile service area.

OG&E, a utility serving 770,000 customers in Oklahoma and western Arkansas, worked with SSN to deploy a small-scale pilot project to test The Smart Energy Network and gauge customer satisfaction. The utility deployed SSN’s network, along with an energy management web-based portal in 25 homes in northwest Oklahoma City. Another 6,600 apartments were given networked meters to allow remote initiation and termination of service.

Consumer response to the project was overwhelmingly positive. Participating residents said they gained flexibility and control over their household’s energy consumption by monitoring their usage on in-home touch screen information panels. According to one customer, “It’s the three A’s: awareness, attitude and action. It increased our awareness. It changed our attitude about when we should be using electricity. It made us take action.”

Based on the results, OG&E presented a plan for expanded deployment to the Oklahoma Corporation Commission for their consideration.

PHI recently announced its partnership with SSN to deliver The Smart Energy Network to its 1.9 million customers across Washington, D.C., Delaware, Maryland and New Jersey. The first phase of the smart grid deployment will begin in Delaware in March 2009 and involve SSN’s advanced metering and distribution automation technology. Additional deployment will depend on regulatory authorization.

The impact of energy efficiency is enormous. More aggressive energy efficiency efforts could cut the growth rate of worldwide energy consumption by more than half over the next 15 years, according to the McKinsey Global Institute. The Brattle Group states that demand response could reduce peak load in the U.S. by at least 5 percent over the next few years, saving over $3 billion per year in electricity costs. The discounted present value of these savings would be $35 billion over the next 20 years in the U.S. alone, with significantly greater savings worldwide.
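
A quick check of that present-value figure: $3 billion per year of savings over 20 years, discounted at an assumed rate of about 6 percent (the source does not state the rate used), comes out near the $35 billion quoted. The short sketch below does the arithmetic.

    # Present value of a 20-year stream of $3B annual savings at an assumed 6% rate.
    def present_value(annual_savings, years, rate):
        return sum(annual_savings / (1 + rate) ** t for t in range(1, years + 1))

    pv = present_value(annual_savings=3e9, years=20, rate=0.06)
    print(f"PV = ${pv / 1e9:.1f} billion")   # roughly $34-35 billion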

Governments throughout the EU, Canada and Australia are now mandating implementation of alternate energy and grid efficiency network programs. The Smart Energy Network is the technology platform that makes energy efficiency and the smart grid possible. And, it is working in the field today.

Managing Communications Change

Change is being forced upon the utilities industry. Business drivers range from stakeholder pressure for greater efficiency to the changing technologies involved in operational energy networks. New technologies such as intelligent networks or smart grids, distribution automation and smart metering are being considered.

The communications network is becoming the key enabler for the evolution of reliable energy supply. However, few utilities today have a communications network that is robust enough to handle and support the exacting demands that energy delivery is now making.

It is this process of change – including the renewal of the communications network – that is vital for each utility’s future. But for the utility, this is a technological step change requiring different strategies and designs. It also requires new skills, all of which must be implemented in timescales that do not sit comfortably with traditional technology strategies.

The problems facing today’s utility include understanding the new technologies and assessing their capabilities and applications. In addition, the utility has to develop an appropriate strategy to migrate legacy technologies and integrate them with the new infrastructure in a seamless, efficient, safe and reliable manner.

This paper highlights the benefits utilities can realize by adopting a new approach to their customers’ needs and engaging a network partner that will take responsibility for the network upgrade, its renewal and evolution, and the service transition.

The Move to Smart Grids

The intent of smart grids is to provide better efficiency in the production, transport and delivery of energy. This is realized in two ways:

  • Better real-time control: ability to remotely monitor and measure energy flows more closely, and then manage those flows and the assets carrying them in real time.
  • Better predictive management: ability to monitor the condition of the different elements of the network, predict failure and direct maintenance. The focus is on being proactive to real needs prior to a potential incident, rather than being reactive to incidents, or performing maintenance on a repetitive basis whether it is needed or not.

These mechanisms imply more measurement points, and more remote monitoring and management capabilities, than exist today. And this requires greater reliance on reliable, robust, highly available communications than has ever been the case before.
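
To make the predictive-management idea concrete, here is a minimal sketch: take daily condition readings from a monitored asset (say, transformer top-oil temperature), fit a simple linear trend, and flag the asset for maintenance before an assumed alarm limit is reached. The 95 C limit, the readings and the 30-day planning horizon are illustrative assumptions, not utility practice.

    # Sketch of condition-based, predictive maintenance from trended readings.
    def days_until_limit(daily_readings, limit):
        """Estimate days remaining before a rising linear trend crosses the limit."""
        n = len(daily_readings)
        if n < 2:
            return None
        xs = range(n)
        mean_x = sum(xs) / n
        mean_y = sum(daily_readings) / n
        slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, daily_readings))
                 / sum((x - mean_x) ** 2 for x in xs))
        if slope <= 0:
            return None                 # not trending toward the limit
        return (limit - daily_readings[-1]) / slope

    history = [78, 79, 79, 81, 82, 84, 85]     # daily top-oil temperature, degrees C
    remaining = days_until_limit(history, limit=95)
    if remaining is not None and remaining < 30:
        print(f"Schedule maintenance: limit reached in about {remaining:.0f} days")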

The communications network must continue to support operational services independently of external events, such as power outages or public service provider failure, yet be economical and simple to maintain. Unfortunately, the majority of today’s utility communications implementations fall far short of these stringent requirements.

Changing Environment

The design template for the majority of today’s energy infrastructure was developed in the 1950s and 1960s – and the same is true of the associated communications networks.

Typically, these communications networks have evolved into a series of overlays, often of different technology types and generations (see Figure 1). For example, protection tends to use its own dedicated network. The physical realization varies widely, from tones over copper via dedicated time division multiplexing (TDM) connections to dedicated fiber connections. These generally use a mix of privately owned and leased services.

Supervisory control and data acquisition (SCADA) systems generally still use modem technology at speeds between 300 baud and 9.6k baud. Again, the infrastructure is often copper or TDM running as one of many separate overlay networks.

Lastly, operational voice services (as opposed to business voice services) are frequently analog on yet another separate network.

Historically, there were good operational reasons for these overlays. But changes in device technology (for example, the evolution toward e-SCADA based on IP protocols), as well as the decreasing support by communications equipment vendors of legacy communications technologies, means that the strategy for these networks has to be reassessed. In addition, the increasing demand for further operational applications (for example, condition monitoring, or CCTV, both to support substation automation) requires a more up-to-date networking approach.
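
The shift from low-speed serial polling to IP-based (“e-SCADA”) communications can be pictured with the sketch below: a single request/response exchange with a field device over TCP. The host name, port and message framing are hypothetical placeholders, not a real protocol such as DNP3 or IEC 61850; the sketch only illustrates the operational pattern.

    # Illustrative IP-based poll of a field device; host, port and framing are
    # hypothetical, not a real SCADA protocol implementation.
    import socket

    def poll_rtu(host, port, request=b"READ ANALOGS\n", timeout_s=2.0):
        """Open a TCP session to a field device, send one request, return the reply."""
        with socket.create_connection((host, port), timeout=timeout_s) as sock:
            sock.sendall(request)
            return sock.recv(4096)

    if __name__ == "__main__":
        try:
            print(poll_rtu("rtu-substation-12.example.net", 20000))
        except OSError as exc:
            # On a 300-9600 baud overlay this single exchange could take seconds;
            # over IP the constraint becomes availability and security, not bandwidth.
            print("poll failed:", exc)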

Tomorrow’s Network

With the exception of protection services, communications between network devices and the network control centers are evolving toward IP-based networks (see Figure 2). The benefits of this simplified infrastructure are significant and can be measured in terms of asset utilization, reduced capital and operational costs, ease of operation, and the flexibility to adapt to new applications. Consequently, utilities will find themselves forced to seriously consider the shift to a modern, homogeneous communications infrastructure to support their critical operational services.

Organizing For Change

As noted above, there are many cogent reasons to transform utility communications to a modern, robust communications infrastructure in support of operational safety, reliability and efficiency. However, some significant considerations should be addressed to achieve this transformation:

Network Strategy. It is almost inevitable that a new infrastructure will cross traditional operational and departmental boundaries within the utility. Each operational department will have its own priorities and requirements for such a network, and traditionally, each wants some, or total, control. However, to achieve real benefits, a greater degree of centralized strategy and management is required.

Architecture and Design. The new network will require careful engineering to ensure that it meets the performance-critical requirements of energy operations. It must maintain or enhance the safety and reliability of the energy network, as well as support the traffic requirements of other departments.

Planning, Execution and Migration. Planning and implementation of the core infrastructure is just the start of the process. Each service requires its own migration plan and has its own migration priorities. Each element requires specialist technical knowledge and, preferably, practical field experience.

Operation. Gone are the days when a communications failure was rectified by sending an engineer into the field to find the fault and fix it. Maintaining network availability and robustness calls for sound operational processes and excellent diagnostics before any engineer or technician hits the road. The same level of robust centralized management tools and processes that support the energy networks has to be put in place to support the communications network – no matter what technologies are used in the field.

Support. Although these technologies are well understood by the telecommunications industry, they are likely to be new to the energy utilities industry. This means that a solid support organization familiar with these technologies must be established. The evolution process requires an intense level of up-front skills and resources. Often these are not readily available in-house – certainly not in the volume required to make any network renewal or transformation effective. Building up this skill and resource base through recruitment will not necessarily yield staff who are aware of the peculiarities of the energy utilities market. As a result, there will be a significant time lag from concept to execution, and considerable risk for the utility as it ventures alone into unknown territory.

Keys To Successful Engagement

Engaging a services partner does not mean ceding control through a rigid contract. Rather, it means crafting a flexible relationship that takes into consideration three factors: What is the desired outcome of the activity? What is the best balance of scope between partner assistance and in-house performance to achieve that outcome? How do you retain the flexibility to accommodate change while retaining control?

Desired outcome is probably the most critical element and must be well understood at the outset. For one utility, the desired outcome may be to rapidly enable the upgrade of the complete energy infrastructure without having to incur the upfront investment in a mass recruitment of the required new communications skills.

For other utilities, the desired outcome may be different. But if the outcomes include elements of time pressure, new skills and resources, and/or network transformation, then engaging a services partner should be seriously considered as one of the strategic options.

Second, not all activities have to be in scope. The objective of the exercise might be to supplement existing in-house capabilities with external expertise. Or, it might be to launch the activity while building up appropriate in-house resources in a measured fashion through the Build-Operate-Transfer (BOT) approach.

In looking for a suitable partner, the utility seeks to leverage not only the partner’s existing skills, but also its experience and lessons learned performing the same services for other utilities. Having a few bruises is not a bad thing – this means that the partner understands what is at stake and the range of potential pitfalls it may encounter.

Lastly, retaining flexibility and control is a function of the contract between the two parties which should be addressed in their earliest discussions. The idea is to put in place the necessary management framework and a robust change control mechanism based on a discussion between equals from both organizations. The utility will then find that it not only retains full control of the project without having to take day-to-day responsibility for its management, but also that it can respond to change drivers from a variety of sources – such as technology advances, business drivers, regulators and stakeholders.

Realizing the Benefits

Outsourcing or partnering the communications transformation will yield benefits, both tangible and intangible. It must be remembered that there is no standard “one-size-fits-all” outsourcing product. Thus, the benefits accrued will depend on the details of the engagement.

There are distinct tangible benefits that can be realized, including:

Skills and Resources. A unique benefit of outsourcing is that it eliminates the need to recruit skills not available internally. These are provided by the partner on an as-needed basis. The additional advantage for the utility is that it does not have to bear the fixed costs once they are no longer required.

Offset Risks. Because the partner is responsible for delivery, the utility is able to mitigate risk. For example, traditionally vendors are not motivated to do anything other than deliver boxes on time. But with a well-structured partnership, there is an incentive to ensure that the strategy and design are optimized to economically deliver the required services and ease of operation. Through an appropriate regime of business-related key performance indicators (KPIs), there is a strong financial incentive for the partner to operate and upgrade the network to maintain peak performance – something that does not exist when an in-house organization is used.

Economies of Scale. Outsourcing can bring economies of scale resulting from synergies with other parts of the partner’s business, such as existing contracts and internal projects.

There also are many other benefits associated with outsourcing that are not as immediately obvious and commercially quantifiable as those listed above, but can be equally valuable.

Some of these less tangible benefits include:

Fresh Point of View. Within most companies, employees often have a vested interest in maintaining the status quo. But a managed services organization has a vested interest in delivering the best possible service to the customer – a paradigm shift in attitude that enables dramatic improvements in performance and creativity.

Drive to Achieve Optimum Efficiency. Executives, freed from the day-to-day business of running the network, can focus on their core activities, concentrating on service excellence rather than complex technology decisions. To quote one customer, “From my perspective, a large amount of my time that might have in the past been dedicated to networking issues is now focused on more strategic initiatives concerned with running my business more effectively.”

Processes and Technologies Optimization. Optimizing processes and technologies to improve contract performance is part of the managed services package and can yield substantial savings.

Synergies with Existing Activities Create Economies of Scale. A utility and a managed services vendor have considerable overlap in the functions performed within their communications engineering, operations and maintenance activities. For example, a multi-skilled field force can install and maintain communications equipment belonging to a variety of customers. This not only provides cost savings from synergies with the equivalent customer activity, but also an improved fault response due to the higher density of deployed staff.

Access to Global Best Practices. An outsourcing contract relieves a utility of the time-consuming and difficult responsibility of keeping up to speed with the latest thinking and developments in technology. Alcatel-Lucent, for example, invests around 14 percent of its annual revenue into research and development; its customers don’t have to.

What Can Be Outsourced?

There is no one outsourcing solution that fits all utilities. The final scope of any project will be entirely dependent on a utility’s specific vision and current circumstances.

The following list briefly describes some of the functions and activities that are good possibilities for outsourcing:

Communications Strategy Consulting. Before making technology choices, the energy utility needs to define the operational strategy of the communications network. Too often communications is viewed as “plug and play,” which is hardly ever the case. A well-thought-out communications strategy will deliver this kind of seamless operation. But without that initial strategy, the utility risks repeating past mistakes and acquiring an ad-hoc network that will rapidly become a legacy infrastructure, which will, in turn, need replacing.

Design. Outsourcing allows utilities to evolve their communications infrastructure without upfront investment in incremental resources and skills. A utility can delegate responsibility for defining the network architecture and the associated network support systems. It may elect to leave all technological decisions to the vendor and merely review progress and outcomes. Or, it may retain responsibility for technology strategy and turn to the managed services vendor to turn that strategy into an architecture and manage the subsequent design and project activities.

Build. Detailed planning of the network, the rollout project and the delivery of turnkey implementations all fall within the scope of the outsourcing process.

Operate, Administer and Maintain. Includes network operations and field and support services:

  • Network Operations. A vendor such as Alcatel-Lucent has the necessary experience in operating Network Operations Centers (NOCs), both on a BOT and an ongoing basis. This includes handling all associated tasks such as performance and fault monitoring, and service management.
  • Network and Customer Field Services. Today, few energy utilities consider outside maintenance and provisioning activities to be a strategic part of their business, and most recognize that these are prime candidates for outsourcing. Activities that can be outsourced include corrective and preventive maintenance, network and service provisioning, and spare parts management, return and repair – in other words, all the daily, time-consuming, but vitally important elements of running a reliable network.
  • Network Support Services. Behind the first-line activities of the NOC is a set of engineering support functions that assist with more complex faults – functions that cannot be automated and tend to duplicate those of the vendor. The integration and sharing of these functions enabled by outsourcing can significantly improve the utility’s efficiency.

Conclusion

Outsourcing can deliver significant benefits to a utility, both in its ability to invest in and improve its operations and in the associated costs. However, each utility has its own unique circumstances, specific immediate needs, and vision of where it is going. Therefore, each technical and operational solution is different.

Alcatel-Lucent Your Smart Grid Partner

Alcatel-Lucent offers comprehensive capabilities that combine Utility industry-specific knowledge and experience with carrier-grade communications technology and expertise. Our IP/MPLS Transformation capabilities and Utility market-specific knowledge are the foundation of turnkey solutions designed to enable Smart Grid and Smart Metering initiatives. In addition, Alcatel-Lucent has specifically developed Smart Grid and Smart Metering applications and solutions that:

  • Improve the availability, reliability and resiliency of critical voice and data communications even during outages
  • Enable optimal use of network and grid devices by setting priorities for communications traffic according to business requirements
  • Meet NERC CIP compliance and cybersecurity requirements
  • Improve the physical security and access control mechanism for substations, generation facilities and other critical sites
  • Offer a flexible and scalable network to grow with the demands and bandwidth requirements of new network service applications
  • Provide secure web access for customers to view account, electricity usage and billing information
  • Improve customer service and experience by integrating billing and account information with IP-based, multi-channel client service platforms
  • Reduce carbon emissions and increase efficiency by lowering communications infrastructure power consumption by as much as 58 percent

Working with Alcatel-Lucent enables Energy and Utility companies to realize the increased reliability and greater efficiency of next-generation communications technology, providing a platform for, and minimizing the risks associated with, moving to Smart Grid solutions. And Alcatel-Lucent helps Energy and Utility companies achieve compliance with regulatory requirements and reductions in operational expenses while maintaining the security, integrity and high availability of their power infrastructure and services. We build Smart Networks to support the Smart Grid.

American Recovery and Reinvestment Act of 2009 Support from Alcatel-Lucent

The American Recovery and Reinvestment Act (ARRA) of 2009 was adopted by Congress in February 2009 and allocates $4.5 billion to the Department of Energy (DoE) for Smart Grid deployment initiatives. As a result of the ARRA, the DoE has established a process for awarding the $4.5 billion via investment grants for Smart Grid Research and Development, and Deployment projects. Alcatel-Lucent is uniquely qualified to help utilities take advantage of the ARRA Smart Grid funding. In addition to world-class technology and Smart Grid and Smart Metering solutions, Alcatel-Lucent offers turnkey assistance in the preparation of grant applications, and subsequent follow-up and advocacy with federal agencies. Partnership with Alcatel-Lucent on ARRA includes:

  • Design, implementation and support for a Smart Grid network
  • Identification of all standardized and unique elements of each grant program
  • Preparation and compilation of all required grant application components, such as project narratives, budget formation, market surveys, mapping, and all other documentation required for completion
  • Advocacy at federal, state, and local government levels to firmly establish the value proposition of a proposal and advance it through the entire process to ensure the maximum opportunity for success

Alcatel-Lucent is a Recognized Leader in the Energy and Utilities Market

Alcatel-Lucent is an active and involved leader in the Energy and Utility market, with active membership and leadership roles in key Utility industry associations, including the Utility Telecom Council (UTC), the American Public Power Association (APPA), and Gridwise. Gridwise is an association of Utilities, industry research organizations (e.g., EPRI, Pacific Northwest National Labs, etc.), and Utility vendors, working in cooperation with DOE to promote Smart Grid policy, regulatory issues, and technologies (see www.gridwise.org for more info). Alcatel-Lucent is also represented on the Board of Directors for UTC’s Smart Network Council, which was established in 2008 to promote and develop Smart Grid policies, guidelines, and recommended technologies and strategies for Smart Grid solution implementation.

Alcatel-Lucent IP MPLS Solution for the Next Generation Utility Network

Utility companies are experienced at building and operating reliable and effective networks to ensure the delivery of essential information and maintain flawless service delivery. The Alcatel-Lucent IP/MPLS solution can enable the utility operator to extend and enhance its network with new technologies like IP, Ethernet and MPLS. These new technologies will enable the utility to optimize its network to reduce both CAPEX and OPEX without jeopardizing reliability. Advanced technologies also allow the introduction of new Smart Grid applications that can improve operational and workflow efficiency within the utility. Alcatel-Lucent leverages cutting edge technologies along with the company’s broad and deep experience in the utility industry to help utility operators build better, next-generation networks with IP/MPLS.

Alcatel-Lucent has years of experience in the development of IP, MPLS and Ethernet technologies. The Alcatel-Lucent IP/MPLS solution offers utility operators the flexibility, scale and feature sets required for mission-critical operation. With the broadest portfolio of products and services in the telecommunications industry, Alcatel-Lucent has the unparalleled ability to design and deliver end-to-end solutions that drive next-generation utility networks.

About Alcatel-Lucent

Alcatel-Lucent’s vision is to enrich people’s lives by transforming the way the world communicates. As a leader in utility, enterprise and carrier IP technologies, fixed, mobile and converged broadband access, applications, and services, Alcatel-Lucent offers the end-to-end solutions that enable compelling communications services for people at work, at home and on the move.

With 77,000 employees and operations in more than 130 countries, Alcatel-Lucent is a local partner with global reach. The company has the most experienced global services team in the industry, and Bell Labs, one of the largest research, technology and innovation organizations focused on communications. Alcatel-Lucent achieved adjusted revenues of €17.8 billion in 2007, and is incorporated in France, with executive offices located in Paris.

Online Transient Stability Controls

For the last few decades, the growth of the world’s population and its corresponding increased demand for electrical energy have created a huge increase in the supply of electrical power. However, for logistical, environmental, political and social reasons, this power generation is rarely near its consumers, necessitating the growth of very large and complex transmission networks. The addition of variable wind energy in remote locations is only exacerbating the situation. In addition, transmission grid capacity has not kept pace with either generation capacity or consumption, while at the same time being extremely vulnerable to potential large-scale outages due to outdated operational capabilities.

For example, today if a fault is detected in the transmission system, the only course is to shed both load and generation. This is often done without consideration of real-time consequences or analysis of alternatives. If not done rapidly, it can result in a widespread, cascading power system blackout. While it is necessary to remove factors that might lead to a large-scale blackout, restricting power flow or applying other countermeasures against such a failure may achieve this only by sacrificing economical operation. Thus, the flexible and economical operation of an electric power system may often be in conflict with the requirement for improved supply reliability and system stability.

Limits of Off-line Approaches

One approach to solving this problem involves stabilization systems that have been deployed for preventing generator step-out by controlling the generator acceleration through power shedding, in which some of the generators are shut off at the time of a power system fault.

In 1975, an off-line special protection system (SPS) for power flow monitoring was introduced to achieve the transient stability of the trunk power system and power source system after a network expansion in Japan. This system was initially of the type for which settings were determined in advance by manual calculations using transient stability simulation programs assuming many contingencies on typical power flow patterns.

This type of off-line solution has the following problems:

  • Planning, design, programming, implementation and operational tasks are laborious. A vast number of simulations are required to determine the setting tables and required countermeasures, such as generator shedding, whenever transmission lines are constructed;
  • It is not well suited to variable generation sources such as wind or photovoltaic farms;
  • It is not suitable for reuse and replication, incurring high maintenance costs; and
  • Excessive travel time and related labor expense is required for the engineer and field staff to maintain the units at numerous sites.

By contrast, an online TSC solution employs various sensors placed throughout the transmission network, substations and generation sources. These sensors are connected to regional computer systems via high-speed communications to monitor for and detect transients that may affect system stability and to execute contingency actions. These regional systems are in turn connected to centralized computers, which monitor the network of distributed computers and build and distribute contingency plans based on historical and recent information. If a transient event occurs, the entire ecosystem responds within 150 ms to detect, analyze, determine the correct course of action, and execute the appropriate set of contingencies in order to preserve the stability of the power network.

In recent years, high-performance computational servers have been developed and their costs reduced enough to use many of them in parallel and/or in a distributed computing architecture. The result is a system that not only greatly increases the availability and reliability of the power system, but also optimizes the throughput of the grid. Thus system reliability has improved or remained stable while the efficiency of the transmission grid has increased, without significant investment in new transmission lines.

Solution and Elements

In 1995, for the first time ever, an online TSC system was developed and introduced in Japan. This solution provided the system stabilization required by the construction of the new 500 kV trunk networks of Chubu Electric Power Co. (CEPCO) [1-4]. Figure 1 shows the configuration of the online TSC system. This system introduced pre-processing online calculation in the TSC-P (parent) in addition to fast, post-event control executed by the combination of TSC-C (child) and TSC-T (terminal). This online TSC system can be considered an example of a self-healing smart grid solution. As a result of periodic simulations using the online data in the TSC-P, operators of energy management systems/supervisory control and data acquisition (EMS/SCADA) are constantly made aware of stability margins for current power system situations.

Using the same online data, periodic calculations performed in the TSC-P reflect current power network situations and determine the proper countermeasures to mitigate transient system events. The TSC-P simulates transient stability dynamics for about 100 contingencies of the 500 kV, 275 kV and 154 kV transmission networks. The setting tables of required countermeasures, such as generator shedding, are periodically sent to the TSC-Cs located at main substations. The TSC-Ts, located at generation stations, shed the generators when an actual fault occurs. The actual generator shedding by the combination of TSC-Cs and TSC-Ts is completed within 150 ms after the fault to maintain the system’s stability.
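
To make the division of labor concrete, here is a minimal Python sketch of the flow just described: a parent periodically pre-computes a setting table from the latest grid snapshot, a child caches that table, and the post-fault action reduces to a fast lookup. All names, data structures and the placeholder simulation are hypothetical stand-ins; the actual TSC-P, TSC-C and TSC-T are dedicated real-time systems, not scripts.

    import time

    def simulate_contingency(fault, snapshot):
        # Placeholder for a full transient stability simulation in the TSC-P;
        # here we simply look up a pre-decided shedding plan.
        return snapshot["shedding_candidates"].get(fault, [])

    class TscParent:
        # Periodically rebuilds the setting tables from the latest online snapshot.
        def __init__(self, contingencies):
            self.contingencies = contingencies

        def build_setting_table(self, snapshot):
            return {fault: simulate_contingency(fault, snapshot)
                    for fault in self.contingencies}

    class TscChild:
        # Caches the latest setting table; the post-fault action is a lookup only.
        def __init__(self):
            self.setting_table = {}

        def update(self, table):
            self.setting_table = table

        def on_fault(self, fault):
            return self.setting_table.get(fault, [])

    # One planning cycle followed by a simulated fault response.
    snapshot = {"shedding_candidates": {"fault_500kV_line_A": ["G3", "G7"]}}
    parent = TscParent(["fault_500kV_line_A"])
    child = TscChild()

    child.update(parent.build_setting_table(snapshot))           # periodic pre-computation
    start = time.perf_counter()
    generators_to_shed = child.on_fault("fault_500kV_line_A")    # post-event lookup
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"shed {generators_to_shed} after {elapsed_ms:.3f} ms of decision time")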

Customer Experiences and Benefits

Figure 2 shows the locations of online TSC systems and their coverage areas in CEPCO’s power network. There are two online TSC systems currently operating; namely, the trunk power TSC system, to protect the 500 kV trunk power system introduced in 1995, and the power source TSC system to protect the 154 kV to 275 kV power source systems around the generation stations.

Actual performance data have shown some significant benefits:

  • Total transfer capability (TTC) is improved through elimination of transient stability limitations. TTC is determined by the minimum of the limits imposed not only by the thermal ratings of transmission lines but also by transient stability, frequency stability and voltage stability (see the expression after this list). Transient stability limits often determine the TTC in the case of long transmission lines from generation plants. CEPCO was able to introduce high-efficiency, combined-cycle power plants without constructing new transmission lines. TTC was increased from 1,500 MW to 3,500 MW by introducing the online TSC solution.
  • Power shedding is optimized. Not only is the power flow of the transmission line on which a fault occurs assessed, but the effects of other power flows surrounding the fault point are included in the analysis to decide the precise stability limit. The online TSC system can also reflect the constraints and priorities of each generator to be shed. To ensure smooth restoration after the fault, the restart time of shed generators, for instance, can also be included.
  • When constructing new transmission lines, numerous off-line studies assuming various power flow patterns are required to support an off-line SPS. After introduction of the online TSC system, supporting new transmission line construction became a matter of updating the equipment database used for simulation in the TSC-P, making the process far more efficient.
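
Expressed compactly (the symbols below are our own shorthand for the four limits named above, not notation from the source), the TTC relationship in the first bullet is simply the minimum of the individual limits:

    \mathrm{TTC} = \min\bigl(P_{\text{thermal}},\; P_{\text{transient}},\; P_{\text{frequency}},\; P_{\text{voltage}}\bigr)

Raising whichever limit is binding (in CEPCO's case, the transient stability limit) raises the TTC directly, which is how transfer capability grew from 1,500 MW to 3,500 MW without new transmission lines.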

In 2003, this CEPCO system received the 44th Annual Edison Award from the Edison Electric Institute (EEI), recognizing CEPCO’s achievement with the world’s first application of this type of system, and the contribution of the system to efficient power management.

Today, benefits continue to accrue. A new TSC-P, which adopts the latest high-performance computation servers, is now under construction for operation in 2009 [3]. The new system will shorten the calculation interval from every five minutes to every 30 seconds in order to reflect power system situations as precisely as possible. This interval was determined by the analysis of various stability situations recorded by the current TSC-P over more than 10 years of operation.

Additionally, although the current TSC-P uses the same online data as the EMS/SCADA, it can control emergency actions against small-signal instability by receiving phasor measurement unit (PMU) data to detect divergences of phasor angles and voltages among the main substations.

Summary

The online TSC system is expected to realize optimum stabilization control of recent complicated power system conditions by obtaining power system information online and carrying out stability calculations at specific intervals. The online TSC will thus help utilities achieve better returns on investment in new or renovated transmission lines, reducing outage time and enabling a more efficient smart grid.

References

  1. Ota, Kitayama, Ito, Fukushima, Omata, Morita and Y. Kokai, “Development of Transient Stability Control System (TSC System) Based on Online Stability Calculation”, IEEE Trans. on Power Systems, Vol. 11, No. 3, pp. 1463-1472, August 1996.
  2. Koaizawa, Nakane, Omata and Y. Kokai, “Actual Operating Experience of Online Transient Stability Control System (TSC System)”, IEEE PES Winter Meeting, 2000, Vol. 1, pp. 84-89.
  3. Takeuchi, Niwa, Nakane and T. Miura, “Performance Evaluation of the Online Transient Stability Control System (Online TSC System)”, IEEE PES General Meeting, June 2006.
  4. Takeuchi, Sato, Nishiiri, Kajihara, Kokai and M. Yatsu, “Development of New Technologies and Functions for the Online TSC System”, IEEE PES General Meeting, June 2006.

Successful Smart Grid Architecture

The smart grid is progressing well on several fronts. Groups such as the Grid Wise Alliance, events such as Grid Week, and national policy citations such as the American Recovery and Reinvestment Act in the U.S., for example, have all brought more positive attention to this opportunity. The boom in distributed renewable energy and its demands for a bidirectional grid are driving the need forward, as are sentiments for improving consumer control and awareness, giving customers the ability to engage in real-time energy conservation.

On the technology front, advances in wireless and other data communications make wide-area sensor networks more feasible. Distributed computation is certainly more powerful – just consider your iPod! Even architectural issues such as interoperability are now being addressed in their own forums such as Grid Inter-Op. It seems that the recipe for a smart grid is coming together in a way that would make many who envisioned it proud. But to avoid making a gooey mess in the oven, an overall architecture that carefully considers seven key ingredients for success must first exist.

Sources of Data

Utilities have eons of operational data: both real-time and archival, both static (such as nodal diagrams within distribution management systems) and dynamic (such as switching orders). There is a wealth of information generated by field crews, and from root-cause analyses of past system failures. Advanced metering infrastructure (AMI) implementations become a fine-grained distribution sensor network feeding communication aggregation systems such as Silver Spring Networks’ Utility IQ or Trilliant’s Secure Mesh Network.

These data sources need to be architected to be available to enhance, support and provide context for real-time data coming in from new intelligent electronic devices (IEDs) and other smart grid devices. In an era of renewable energy sources, grid connection controllers become yet another data source. With renewables, micro-scale weather forecasting such as IBM Research’s Deep Thunder can provide valuable context for grid operation.

Data Models

Once data is obtained, in order to preserve its value in a standard format, one can think in terms of an extensible markup language (XML)-oriented database. Modern implementations of these databases have improved performance characteristics, and the International Electrotechnical Commission (IEC) common information model/generic interface definition (CIM/GID), though oriented more to assets than operations, is a front-running candidate for consideration.

Newer entries, such as the device language message specification/companion specification for energy metering (DLMS/COSEM) for AMI, are also coming into practice. Sometimes more important than the technical implementation of the data, however, is the model that is employed. A well-designed data model not only makes exchange of data and legacy program adjustments easier, but it can also help with the applicability of security and performance requirements. The existence of data models is often a good indicator of an intact governance process, for it facilitates use of the data by multiple applications.
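
As a purely illustrative example of what a standardized, XML-oriented representation of an asset might look like, the short Python sketch below builds a CIM-flavored fragment with the standard library. The tag names and values are simplified stand-ins and do not follow the actual IEC CIM RDF/XML schema.

    import xml.etree.ElementTree as ET

    # Build a small, CIM-flavored asset record (illustrative tags only).
    asset = ET.Element("PowerTransformer", attrib={"mRID": "XFMR-1042"})
    ET.SubElement(asset, "name").text = "Walden 138/13.8 kV T1"
    ET.SubElement(asset, "ratedApparentPowerMVA").text = "25"
    measurement = ET.SubElement(asset, "Measurement", attrib={"type": "TopOilTemperature"})
    ET.SubElement(measurement, "unit").text = "degC"

    # Serialize so the record can be stored in an XML-oriented database or exchanged.
    print(ET.tostring(asset, encoding="unicode"))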

Communications

Customer workshops and blueprinting sessions have shown that one of the most common issues needing to be addressed is the design of the wide-area communication system. Data communications architecture affects data rate performance, the cost of distributed intelligence and the identification of security susceptibilities.

There is no single communications technology that is suitable for all utilities, or even for all operational areas across any individual utility. Rural areas may be served by broadband over powerline (BPL), while urban areas benefit from multi-protocol label switching (MPLS) and purpose-designed mesh networks, enhanced by their proximity to fiber.

In the future, there could be entirely new choices in communications. So, the smart grid architect needs to focus on security, standardized interfaces to accept new technology, enablement of remote configuration of devices to minimize any touching of smart grid devices once installed, and future-proofing the protocols.

The architecture should also be traceable to the business case. This needs to include probable use cases that may not be in the PUC filing, such as AMI now, but smart grid later. Few utilities will be pleased with the idea of a communication network rebuild within five years of deploying an AMI-only network.

Communications architecture must also consider power outages, so battery backup, solar recharging, or other equipment may be required. Even arcane details such as “Will the antenna on a wireless device be the first thing to blow off in a hurricane?” need to be considered.

Security

Certainly, the smart grid’s purpose is to enhance network reliability, not lower its security. But with the advent of the North American Electric Reliability Corporation Critical Infrastructure Protection (NERC-CIP) standards, security has risen to become a prime consideration, usually addressed in phase one of the smart grid architecture.

Unlike security in the data center, field-deployed security presents many new situations and challenges. There is security at the substation – for example, who can access what networks, and when, from within the control center. At the other end, security of the meter data in a proprietary AMI system needs to be addressed so that only authorized applications and personnel can access the data.

Service oriented architecture (SOA) appliances are network devices to enable integration and help provide security at the Web services message level. These typically include an integration device, which streamlines SOA infrastructures; an XML accelerator, which offloads XML processing; and an XML security gateway, which helps provide message-level, Web-services security. A security gateway helps to ensure that only authorized applications are allowed to access the data, whether an IP meter or an IED. SOA appliance security features complement the SOA security management capabilities of software.
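
The sketch below illustrates, in heavily simplified form, the message-level authorization idea: before a request reaches meter data, the gateway checks whether the calling application is allowed to touch that resource. Real XML security gateways also validate schemas, verify signatures and inspect payloads; the application and resource names here are hypothetical.

    # Which applications may access which resources (hypothetical names).
    AUTHORIZED = {
        "meter_reads": {"mdms", "billing"},
        "switch_commands": {"dms"},
    }

    def authorize(app_id, resource):
        # Grant access only if the calling application is on the allowlist
        # for the requested resource; deny anything unknown by default.
        return app_id in AUTHORIZED.get(resource, set())

    print(authorize("mdms", "meter_reads"))                  # True
    print(authorize("customer_portal", "switch_commands"))   # False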

Proper architectures could address dynamic, trusted virtual security domains, and be combined not only with intrusion protection systems but also with anomaly detection systems. If hackers can introduce viruses in data (such as malformed video images that exploit faults in media players), then similar concerns should be under discussion for smart grid data. Is manipulating 300 megawatts (MW) of demand response much different from cyber attacking a 300 MW generator?

Analytics

A smart grid cynic might say, “Who is going to look at all of this new data?” That is where analytics supports the processing, interpretation and correlation of the flood of new grid observations. One part of the analytics would be performed by existing applications. This is where data models and integration play a key role. Another part of the analytics dimension is with new applications and the ability of engineers to use a workbench to create their customized analytics dashboard in a self-service model.

Many utilities have power system engineers in a back office using spreadsheets; part of the smart grid concept is that all data is available to the community to use modern tools to analyze and predict grid operation. Analytics may need a dedicated data bus, separate from an enterprise service bus (ESB) or enterprise SOA bus, to meet the timeliness and quality of service to support operational analytics.

A two-tier or three-tier (if one considers the substations) bus is an architectural approach to segregate data by speed while still maintaining interconnections that support a holistic view of the operation. Connections to standard industry tools such as ABB’s NEPLAN® or Siemens Power Technologies International PSS®E, or general tools such as MATLAB, should be considered at design time, rather than as an additional expense commitment after smart grid commissioning.
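
As a toy illustration of the two-tier idea (an assumed structure, not a prescribed design), the sketch below keeps high-rate telemetry on an operational bus and forwards only aggregated values to an enterprise bus. Production systems would use messaging middleware rather than in-process callbacks, and all topic names are invented.

    from collections import defaultdict
    from statistics import mean

    class Bus:
        # A minimal publish/subscribe bus; real deployments use messaging middleware.
        def __init__(self):
            self.subscribers = defaultdict(list)

        def subscribe(self, topic, handler):
            self.subscribers[topic].append(handler)

        def publish(self, topic, message):
            for handler in self.subscribers[topic]:
                handler(message)

    operational_bus = Bus()   # fast tier: raw telemetry
    enterprise_bus = Bus()    # slower tier: aggregated results
    window = []

    def bridge(sample):
        # Aggregate high-rate samples before anything crosses to the enterprise tier.
        window.append(sample)
        if len(window) == 10:
            enterprise_bus.publish("feeder_load_avg_MW", mean(window))
            window.clear()

    operational_bus.subscribe("feeder_load_MW", bridge)
    enterprise_bus.subscribe("feeder_load_avg_MW",
                             lambda avg: print(f"average feeder load {avg:.1f} MW"))

    for i in range(20):
        operational_bus.publish("feeder_load_MW", 4.0 + 0.1 * i)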

Integration

Once data is sensed, securely communicated, modeled and analyzed, the results need to be applied for business optimization. This means new smart grid data gets integrated with existing applications, and metadata locked in legacy systems is made available to provide meaningful context.

This is typically accomplished by enabling systems as services per the classic SOA model. However, issues of common data formats, data integrity and name services must be considered. Data integrity includes verification and cross-correlation of information for validity, and designation of authoritative sources and specific personnel who own the data.

Name services address the common issue of an asset – whether transformer or truck – having multiple names in multiple systems. An example might be a substation that has a location name, such as Walden; a geographic information system (GIS) identifier such as latitude and longitude; a map name such as the nearest cross streets; a capital asset number in the financial system; a logical name in the distribution system topology; an abbreviated logical name to fit in the distribution management system graphical user interface (DMS GUI); and an IP address for the main network router in the substation.

Different applications may know new data by association with one of those names, and that name may need translation to be used in a query with another application. While rewriting the applications to a common model may seem appealing, it may very well send a CIO into shock. While the smart grid should help propagate intelligence throughout the utility, this doesn’t necessarily mean to replace everything, but it should “information-enable” everything.
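
A name service can be as simple in concept as one canonical record per asset with per-system aliases, so that a name known to one application can be translated into the name another application expects. The Python sketch below illustrates the idea; all identifiers are invented.

    # One canonical record per asset, with the aliases each system uses.
    ASSET_REGISTRY = {
        "sub-0042": {
            "common_name": "Walden",
            "gis": "41.3300,-72.9500",
            "financial": "CAP-118834",
            "dms_gui": "WLDN",
            "network": "10.12.42.1",
        },
    }

    def translate(system_from, name, system_to):
        # Find the asset that `system_from` knows by `name` and return the
        # identifier that `system_to` uses for the same asset.
        for record in ASSET_REGISTRY.values():
            if record.get(system_from) == name:
                return record.get(system_to)
        return None

    print(translate("dms_gui", "WLDN", "financial"))   # CAP-118834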

Interoperability is essential at both a service level and at the application level. Some vendors focus more at the service, but consider, for example, making a cell phone call from the U.S. to France – your voice data may well be code division multiple access (CDMA) in the U.S., travel by microwave and fiber along its path, and emerge in France in a global system for mobile (GSM) environment, yet your speech, the “application level data,” is retained transparently (though technology does not yet address accents!).

Hardware

Computerized solutions are not about software alone. For instance, AMI storage consolidation addresses the concern that the volume of data coming into the utility will be increasing exponentially. As more meter data can be read in an on-demand fashion, data analytics will be employed to properly understand it all, requiring a sound hardware architecture to manage, back up and feed the data into the analytics engines. In particular, storage is needed in the head-end systems and the meter-data management systems (MDMS).

Head-end systems pull data from the meters to provide management functionality while the MDMS collects data from head-end systems and validates it. Then the data can be used by billing and other business applications. Data in both the head-end systems and the master copy of the MDMS is replicated into multiple copies for full back up and disaster recovery. For MDMS, the master database that stores all the aggregated data is replicated for other business applications, such as customer portal or data analytics, so that the master copy of the data is not tampered with.

Since smart grid is essentially performing in real time, and the electricity business is non-stop, one must think of hardware and software solutions as needing to be fail-safe with automated redundancy. The AMI data especially needs to be reliable. The key factors then become: operating system stability; hardware true memory access speed and range; server and power supply reliability; file system redundancy such as a JFS; and techniques such as FlashCopy to provide a point-in-time copy of a logical drive.

FlashCopy can be useful in speeding up database hot backups and restores. VolumeCopy can extend the replication functionality by providing the ability to copy the contents of one volume to another. Enhanced remote mirroring (Global Mirror, Global Copy and Metro Mirror) can provide the ability to mirror data from one storage system to another, over extended distances.

Conclusion

Those are seven key ingredients for designing or evaluating a recipe for success with regard to implementing the smart grid at your utility. Addressing these dimensions will help achieve a solid foundation for a comprehensive smart grid computing system architecture.

Thinking Smart

For more than 30 years, Newton-Evans Research Company has been studying the initial development and the embryonic and emergent stages of what the world now collectively terms the smart, or intelligent, grid. In so doing, our team has examined the technology behind the smart grid, the adoption and utilization rates of this technology bundle and the related market segments for a dozen or so major components of today’s – and tomorrow’s – intelligent grid.

This white paper contains information on eight of these key components of the smart grid: control systems, smart grid applications, substation automation programs, substation IEDs and devices, advanced metering infrastructure (AMI) and automated meter-reading devices (AMR), protection and control, distribution network automation and telecommunications infrastructure.

Keep in mind that there is a lot more to the smart grid equation than simply installing advanced metering devices and systems. A large AMI program may not even be the correct starting point for hundreds of the world’s utilities. Perhaps it should be a near-term upgrade to control center operations or to electronic device integration of the key substations, or an initial effort to deploy feeder automation, or even a complete protection and control (P&C) migration to digital relaying technology.

There simply is not a straightforward roadmap to show utilities how to develop a smart grid that is truly in that utility’s unique best interests. Rather, each utility must endeavor to take a step back and evaluate, analyze and plan for its smart grid future based on its (and its various stakeholders’) mission, its role, its financial and human resource limitations and its current investment in modern grid infrastructure and automation systems and equipment.

There are multiple aspects of smart grid development, some of which involve administrative as well as operational components of an electric power utility, and include IT involvement as well as operations and engineering; administrative management of customer information systems (CIS) and geographic information systems (GIS) as well as control center and dispatching operation of distribution and outage management systems (DMS and OMS); substation automation as well as true field automation; third-party services as well as in-house commitment; and of course, smart metering at all levels.

Space Station

I have often compared the evolution of the smart grid to the iterative process of building the international space station: a long-term strategy, a flexible planning environment, responsive changes incorporated into the plan as technology develops and matures, properly phased. The function we really need is that of a skilled smart grid architect to oversee the increasingly complex duties of an effective systems planning organization within the utility.

All of these soon-to-be-interrelated activities need to be viewed in light of the value they add to operational effectiveness and operating efficiencies as well as the effect of their involvement with one another. If the utility has not yet done so, it must strive to adopt a systems-wide approach to problem solving for any one grid-related investment strategy. Decisions made for one aspect of control and automation will have an impact on other components, a lesson drawn from 40 years of accumulated utility operational insight gained in the digital age.

No utility can today afford to play whack-a-mole with its approach to the intelligent grid and related investments, isolating and solving one problem while inadvertently creating another larger or more costly problem elsewhere because of limited visibility and “quick fix” decision making.

As these smart grid building blocks are put into service, as they become integrated and are made accessible remotely, the overall smart grid necessarily becomes more complex, more communications-centric and more reliant on sensor-based field developments.

In some sense, it reminds one of building the space station. It takes time. The process is iterative. One component follows another, with planning on a system-wide basis. There are no quick solutions. Everything must be very systematically approached from the outset.

Buckets of Spending

We often tackle questions about the buckets of spending for smart grid implementations. This is the trigger for the supply side of the smart grid equation. Suppliers are capable of developing, and will make the required R&D investment in, any aspect of transmission and distribution network product development – if favorable market conditions exist or if market outlooks can be supported with field research. Hundreds of major electric power utilities from around the world have already contributed substantially to our ongoing studies of smart grid components.

In looking at the operational/engineering components of smart grid developments, centering on the physical grid itself (whether a transmission grid, a distribution grid or both), one must include what today comprises P&C, feeder and switch automation, control center-based systems, substation measurement and automation systems, and other significant distribution automation activities.

On the IT and administrative side of smart grid development, one has to include the upgrades that will definitely be required in the near- or mid-term, including CIS, GIS, OMS and wide area communications infrastructure required as the foundation for automatic metering. Based on our internal estimates and those of others, spending for grid automation is pegged for 2008 at or slightly above $1 billion nationwide and will approach $3.5 billion globally. When (if) we add in annual spending for CIS, GIS, meter data management and communications infrastructure developments, several additional billions of dollars become part of the overall smart grid pie.

In a new question included in the 2008 Newton-Evans survey of control center managers, these officials were asked to check the two most important components of near-term (2008-2010) work on the intelligent grid. A total of 136 North American utilities and nearly 100 international utilities provided their comments by indicating their two most important efforts during the planning horizon.

On a summary basis, AMI led with mentions from 48 percent of the group. EMS/SCADA investments in upgrades, new applications, interfaces et al. were next, mentioned by 42 percent of the group. Distribution automation was cited by 35 percent.

Spending Outlook

The financial environment and economic outlook do not bode well for many segments of the national and global economies. One question we have continuously been asked well into this year is whether the electric power industry will suffer the fate of other industries and significantly scale back planned spending on T&D automation because of possible revenue erosion given the slowdown and fallout from this year’s difficult industrial and commercial environments.

Let’s first take a summary look at each of the five major components of T&D automation because these all are part and parcel of the operations/engineering view of the smart grid of the future.

Control Systems Outlook: Driven by SCADA-like systems and including energy management systems and distribution management software, this segment of the market is hovering around the $500 million mark on a global scale – excluding the values of turn-key control center projects (engineering, procurement and construction (EPC) of new control center facilities and communications infrastructure). We see neither growth nor erosion in this market for the near-term, with some up-tick in spending for new applications software and better visualization tools to compensate for the “aging” of installed systems. While not a control center-based system, outage management is a closely aligned technology development, and will continue to take hold in the global market. Sales of OMS software and platforms are already approaching the $100 million mark led by the likes of Oracle Utilities, Intergraph and MilSoft.

Substation Automation and Integration Programs: The market for substation IEDs, for new communications implementations and for integration efforts has grown to nearly $500 million. Multiyear programs aimed at upgrading, integrating and automating the existing global base of about a quarter million transmission and primary distribution substations have been underway for some time. Some programs launched in 2008 will continue into 2011. We see a continuation of the growth in spending for critical substation A&I programs, although 2009 will likely see the slowest rate of growth in several years (less than 3 percent) if the current economic malaise persists through the year. Continuing emphasis will be on HV transmission substations as the first priority for upgrades and the addition of more intelligent electronic devices.

AMI/AMR: This is the lynchpin for the smart grid in the eyes of many industry observers, utility officials and perhaps most importantly, regulators at the state and federal levels of the U.S., Canada, Australia and throughout Western Europe. With nearly 1.5 billion electricity meters installed around the world, and about 93 percent being electro-mechanical, interest in smart metering can also be found in dozens of other countries, including Indonesia, Russia, Honduras, Malaysia, Australia, and Thailand. Another form of smart meters, the prepayment meter, is taking hold in some of the developing nations of the world. The combined resources of Itron, coupled with its Actaris acquisition, make this U.S. firm the global share leader in sales and installations of AMI and AMR systems and meters.

Protection and Control: The global market for protective relays, the foundation for P&C has climbed well above $1.5 billion. Will 2009 see a drop in spending for protective relays? Not likely, as these devices continue to expand in capabilities, and undertake additional functions (sequence of event recording, fault recording and analysis, and even acting as a remote terminal unit). To the surprise of many, there is still a substantial amount (perhaps as much as $125 million) being spent annually for electro-mechanical relays nearly 20 years into the digital relay era. The North American leader in protective relay sales to utilities is SEL, while GE Multilin continues to hold a leading share in industrial markets.

Distribution Automation: Today, when we discuss distribution automation, the topic can encompass any and all aspects of a distribution network automation scheme, from the control center-based SCADA and distribution management system on out to the substation, where RTUs, PLCs, power meters, digital relays, bay controllers and a myriad of communicating devices now help operate, monitor and control power flow and measurement in the medium voltage ranges.

Nonetheless, it is beyond the substation fence, reaching further down into the primary and secondary network, where we find reclosers, capacitors, pole top RTUs, automated overhead switches, automated feeders, line reclosers and associated smart controls. These are the new smart devices that comprise the basic building blocks for distribution automation. The objective will be achieved with the ability to detect and isolate faults at the feeder level, and enable ever faster service restoration. With spending approaching $1 billion worldwide, DA implementations will continue to expand over the coming decade, nearing $2.6 billion in annual spending by 2018.

Summary

The T&D automation market and the smart grid market will not go away this year, nor will they shrink. When telecommunications infrastructure developments are included, about $5 billion will have been spent in 2008 for global T&D automation programs. When AMI programs are added into the mix, the total exceeds $7 billion. T&D automation spending growth will likely be subdued, perhaps into 2010. However, the overall market for T&D automation is likely to be propped up to remain at or near current levels of spending for 2009 and into 2010, benefiting from the continued regulatory-driven momentum for AMI/AMR, renewable portfolio standards and demand response initiatives. By 2011, we should once again see healthier capital expenditure budgets, prompting overall T&D automation spending to reach about $6 billion annually. Over the 2008-2018 period, we anticipate more than $75 billion in cumulative smart grid expenditures.

Expenditure Outlook

Newton-Evans staff has examined the current outlook for smart grid-related expenditures and has made a serious attempt to avoid double counting potential revenues from all of the components of information systems spending and the emerging smart grid sector of utility investment.

While the enterprise-wide IT portions (blue and red segments) of Figure 1 include all major components of IT (hardware, software, services and staffing), the “pure” smart grid components tend to be primarily in hardware, in our view. Significant overlap with both administrative and operational IT supporting infrastructure is a vital component for all smart grid programs underway at this time.

Between “traditional IT” and the evolving smart grid components, nearly $25 billion will likely be spent this year by the world’s electric utilities. Nearly one-third of all 2009 information technology investments will be “smart grid” related.

By 2013, the total value of the various pie segments is expected to increase substantially, with “smart grid” spending possibly exceeding $12 billion. While this amount is generally understood to be conservative, and somewhat lower than smart grid spending totals forecasted by other firms, we will stand by our forecasts, based on 31 years of research history with electric power industry automation and IT topics.

Some industry sources may include the total value of T&D capital spending in their smart grid outlook.

But that portion of the market is already approaching $100 billion globally, and will likely top $120 billion by 2013. Much of that market would go on whether or not a smart grid is involved. Clearly, all new procurements of infrastructure equipment will be made with an eye to including as much smart content as is available from the manufacturers and integrators.

What we are limiting our definition to is edge investment: the components of the 21st-century digital transport and delivery systems being added on or incorporated into the building blocks (power transformers, lines, switchgear, etc.) of electric power transmission and delivery.

Is Your Mobile Workforce Truly Optimized?

ClickSoftware is the leading provider of mobile workforce management and service optimization solutions that create business value for service operations through higher levels of productivity, customer satisfaction and cost effectiveness. Combining educational, implementation and support services with best practices and its industry leading solutions, ClickSoftware drives service decision making across all levels of the organization.

Our mobile workforce management solution helps utilities empower mobile workers with accurate, real-time information for optimum service and quick on-site decision making. From proactive customer demand forecasting and capacity planning to real-time decision making, incorporating scheduling, mobility and location-based services, ClickSoftware helps service organizations get the most out of their resources.

The IBM/ClickSoftware alliance provides the most comprehensive offering for Mobile Workforce and Asset Management powering the real-time service enterprise. Customers can benefit from maximized workforce productivity and customer satisfaction while controlling, and then minimizing, operational costs.

ClickSoftware provides a flexible, scalable and proven solution that has been deployed at many utility companies around the world. Highlights include the ability to:

  • Automatically update the schedule based on real-time information from the field;
  • Manage crews (parts and people);
  • Cover a wide variety of job types within one product: from short jobs requiring one person to multi-stage jobs requiring a multi-person team over several days or weeks;
  • Balance regulatory, environmental and union compliance;
  • Continuously strive to raise the bar in operational excellence;
  • Incorporate street-level routing into the decision making process; and
  • Plan for catastrophic events and seasonal variability in field service operations.

The resulting value proposition to the customer is extremely compelling:

  • Typically, optimized scheduling and routing of the mobile workforce generates a 31 percent increase in jobs per day vs. the industry average. (Source: AFSMI survey, 2003)
  • A variety of solutions, ranging from entry level to advanced, that directly address the broad spectrum of pains experienced by service organizations around the world, including optimized scheduling, routing, mobile communications and integration of solution components – within the service optimization solution itself and also into the CRM/ERP/EAM back end.
  • An entry level offering with a staged upgrade path toward a fully automated service optimization solution ensures that risk is managed and the most challenging of customer requirements may be met. This “least risk” approach for the customer is delivered by a comprehensive set of IBM business consulting, installation and support services.
  • The industry-proven credibility of ClickSoftware’s ServiceOptimization Suite, combined with IBM’s wireless middleware, software, hardware and business consulting services provides the customer with the most effective platform for managing its field service operations.

ClickSoftware’s customers represent a cross-section of leaders in the utilities, telecommunications, computer and office equipment, home services and capital equipment industries. Over 100 customers around the world have deployed ClickSoftware workforce and service optimization solutions and services to achieve optimal levels of field service.