Modeling Distribution Demand Reduction

By Chris Trayhorn, Publisher of mThink Blue Book, January 1, 2009

In the past, distribution demand reduction was a technique used only in emergency situations a few times a year – if that. It was an all-or-nothing capability that a utility switched on, then hoped for the best until the emergency was over. Few utilities could measure the effectiveness, let alone the potential, of any solutions that were devised. Now, demand reduction is evolving to support the distribution network during typical peaking events, not just emergencies. In this mode, however, it is important not only to understand the solution's effectiveness, but also to be able to treat it like any other dispatchable load-shaping resource. Advanced modeling techniques and capabilities are allowing utilities to do just that. This paper outlines methods and tools that allow utilities to model distribution demand reduction capabilities within set time periods, or even in near real time.

Electricity demand continues to outpace the ability to build new generation and the supporting infrastructure needed to meet demand-side growth driven by rising populations and smart residences across the globe. In most parts of the world, electrical energy is one of the foundations of modern civilization. It helps produce our food, keeps us comfortable, and provides lighting, security, information and entertainment. In short, it is part of almost every facet of life, and without electrical energy, the modern interconnected world as we know it would cease to exist.

Every country has one or more initiatives underway, or in planning, to deal with some aspect of generation, storage, delivery or consumption. Additionally, greenhouse gases (GHG) and carbon emissions need to be tightly controlled and monitored. All of this must be carefully balanced against expectations from financial markets that utilities deliver balanced and secure investment portfolios, demonstrating the fiduciary responsibility to sustain revenue projections and measured growth.

The architects of today's electric grid probably never envisioned the day when electric utilities would purposefully take measures to reduce the load on the network, deal with highly variable localized generation and reverse power flows, or operate under a regulatory climate that shapes the decisions behind those measures. They designed the transmission and distribution systems to be robust, flexible and resilient. When first conceived, the electric grid was far from stable and resilient; it took growth, prudence and planning to continue the expansion of the electric distribution system. That grid was made up of a limited number of real-power and reactive-power devices that responded to occasional changes in power flow and demand. It was also designed in a world with far fewer people, a virtually unlimited source of power, and little concern for, or knowledge of, the environmental effects of energy production and consumption.

To effectively mitigate these complex issues, a new type of electric utility business model must be considered. It must rapidly adapt to ever-changing demands in terms of generation, consumption, and environmental and societal benefits. It requires a grid made up of many intelligent and active devices that can manage consumption from both the consumer and utility side of the meter.
This new business model will use demand management as a key element of utility operations while, at the same time, shaping consumer spending behavior. To that end, a holistic model is needed that captures all aspects of the energy value chain across generation, delivery and consumption, and can optimize the solution in real time. While such a unifying model may still be years away, much can be gained today from modeling and visualizing the distribution network to gauge the effect that demand reduction can – and does – have in near real time. The approaches below are a practical starting point.

Advanced Feeder Modeling

First, a utility needs to understand in more detail how its distribution network behaves. When distribution networks were conceived, they were designed primarily with sources (the head of the feeder and substation) and sinks (the consumers, or load) spread out along the network. Power flows were assumed to run in one direction only, and feeders were modeled for the largest peak level. Voltage and volt-ampere reactive (VAR) management were generally considered for loss optimization, not load reduction. No thought was given to limiting power to segments of the network, or to distributed storage or generation, all of which can dramatically affect flows on the network, even causing reverse flows at times. Sensors to measure voltage and current were applied at the head of the feeder and at a few critical points (mostly in historical problem areas).

At most utilities, feeder planning is an exercise performed when large changes are anticipated (e.g., a new subdivision or a major customer) or on a periodic basis, usually every three to five years. Loads were traditionally well understood, with predictable variability, so this approach worked reasonably well. The utility also controlled all generation sources on the network (i.e., peakers), and when demand reduction was needed, it was invoked by the utility, usually only during critical periods.

Today's feeders are much more complex, and are significantly influenced by both generation and demand from entities outside the utility's control. Even within the utility, seemingly disparate groups will, at times, attempt to alter power flows along the network. The simple model of worst-case peaking on a feeder is no longer sufficient to understand the modern distribution network. The following factors must be considered in the planning model:

- Various demand-reduction techniques, when and where they are applied, and the potential load they may affect;
- Use of voltage reduction as a load-shedding technique, and where it will most likely yield significant results (i.e., resistive load);
- Location, size and capacity of storage;
- Location, size and type of renewable generation systems;
- Use and location of plug-in electric vehicles;
- Standby generation that can be fed into the network;
- Various social ecosystems and their characteristics that influence load; and
- Location and types of sensors available.

Generally, feeders are modeled as a single unit, with their power characteristics derived from the maximum peaking load and the connected kilovolt-amperage (kVA) of downstream transformers. A more advanced model treats the feeder as a series of connected segments. The segment definitions can be arbitrary, but they are generally chosen where the utility will want to understand, and potentially control, those segments differently from others.
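To make the segmented view concrete, the sketch below represents a feeder as a list of segments, each carrying connected kVA, peak load, distributed generation and storage, with a crude estimate of the load relief available from voltage reduction. It is a minimal illustration only: the class names, the linear voltage-response assumption, and the sample figures are assumptions made for this sketch, not values or methods from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class FeederSegment:
    """One controllable stretch of feeder between two boundary points."""
    name: str
    connected_kva: float         # summed nameplate kVA of downstream transformers
    peak_load_kw: float          # historical or estimated segment peak demand
    resistive_fraction: float    # share of load that responds to voltage reduction
    dg_capacity_kw: float = 0.0  # distributed generation on this segment
    storage_kw: float = 0.0      # dispatchable storage on this segment

    def voltage_reduction_kw(self, pct: float) -> float:
        # Crude assumption: only the resistive portion of the load responds,
        # roughly linearly, to a small voltage reduction.
        return self.peak_load_kw * self.resistive_fraction * pct

@dataclass
class Feeder:
    name: str
    segments: list[FeederSegment] = field(default_factory=list)

    def net_peak_kw(self) -> float:
        # Peak demand at the feeder head, net of distributed generation.
        return sum(s.peak_load_kw - s.dg_capacity_kw for s in self.segments)

    def relief_kw(self, pct: float) -> float:
        # Total load relief from voltage reduction plus storage dispatch.
        return sum(s.voltage_reduction_kw(pct) + s.storage_kw for s in self.segments)

if __name__ == "__main__":
    feeder = Feeder("F-101", [
        FeederSegment("head", connected_kva=5000, peak_load_kw=3800, resistive_fraction=0.35),
        FeederSegment("mid", connected_kva=3000, peak_load_kw=2400, resistive_fraction=0.55, dg_capacity_kw=500),
        FeederSegment("tail", connected_kva=1500, peak_load_kw=1100, resistive_fraction=0.60, storage_kw=250),
    ])
    print(f"Net peak: {feeder.net_peak_kw():.0f} kW")
    print(f"Relief at 2.5% voltage reduction: {feeder.relief_kw(0.025):.0f} kW")
```

Structuring the model per segment, rather than as one lump, is what lets a utility ask where voltage reduction or storage dispatch will actually pay off.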
Segment boundaries may be influenced by voltage regulation, load curtailment, stability issues, distributed generation sources, storage, or other characteristics that differ from one segment to the next.

The following is an advanced means of modeling electrical distribution feeder networks. It provides for segmentation and sensor placement in the absence of a complete network and historical usage model. The approach combines traditional electrical engineering and power-flow modeling, using tools such as CYME, with non-traditional approaches using geospatial and statistical analysis. The model builds on information such as usage data, network diagrams, device characteristics and existing sensors. It then adds elements that could diverge from the known model – social behavior, demand-side programs, and future grid operations – based on both spatio-temporal and statistical modeling. Finally, suggestions can be made about the placement and characteristics of sensors on the network to support system monitoring once it is in place.

Generally, a utility would take a simpler view of the problem. It would start by directly applying statistical analysis and stochastic modeling across the grid to develop a generic methodology for selecting the number of sensors and where to place them, based on sensor accuracy, cost, and the risk of error introduced by basic modeling assumptions (load allocation, timing of peak demand, and other influences on error). However, doing so would limit the utility to the data it already has, in an environment that will be changing dramatically. The recommended approach first performs analysis to determine what the potential error sources are, which sources are material to the sensor question, and which could influence the system's power flows. Next, an attempt can be made to characterize geographically where on the grid these influences are most significant. Then, a statistical approach can be applied to develop a model for setting the number, type and location of additional sensors. Lastly, sensor density and placement can be addressed.

Feeder Modeling Technique

Feeder conditioning is important for minimizing losses, especially when the utility wants to moderate voltage levels as a load modification method. Without proper feeder conditioning and sufficient sensors to monitor the network, the utility risks either violating regulatory voltage limits or falling short of the optimal load reduction during voltage reduction operations.

Traditionally, feeder modeling is a planning activity performed at periodic (for example, yearly) intervals or ahead of an expected change in usage. Tools such as CYME's CYMDIST provide feeder analysis using:

- Balanced and unbalanced voltage drop analysis (radial, looped or meshed);
- Optimal capacitor placement and sizing to minimize losses and/or improve the voltage profile;
- Load balancing to minimize losses;
- Load allocation/estimation using customer consumption data (kWh), distribution transformer size (connected kVA), real consumption (kVA or kW) or the REA method, with multiple metering units treated as fixed demands and large metered customers as fixed loads (a simple sketch of this allocation method appears below);
- Flexible load models for uniformly distributed loads and spot loads, featuring an independent load mix for each section of circuit;
- Load growth studies for multiple years; and
- Distributed generation.

In many cases, however, much of the information required to run an accurate model is not available.
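As referenced in the list above, connected-kVA load allocation can be sketched in a few lines. The following is a minimal illustration, assuming one feeder-head demand reading, transformer nameplate sizes, and metered large customers treated as fixed loads; the function and sample numbers are hypothetical, not CYMDIST's actual implementation.

```python
def allocate_load(head_demand_kw: float,
                  transformers: dict[str, float],
                  fixed_loads_kw: dict[str, float] | None = None) -> dict[str, float]:
    """Spread the feeder-head demand across transformers in proportion to
    connected kVA, after subtracting metered (fixed) loads, in the spirit
    of the classic connected-kVA allocation method."""
    fixed_loads_kw = fixed_loads_kw or {}
    residual_kw = head_demand_kw - sum(fixed_loads_kw.values())
    total_kva = sum(transformers.values())
    allocation = {name: residual_kw * kva / total_kva
                  for name, kva in transformers.items()}
    allocation.update(fixed_loads_kw)  # metered customers keep their measured demand
    return allocation

if __name__ == "__main__":
    xfmrs = {"TX-1": 500.0, "TX-2": 250.0, "TX-3": 250.0}
    result = allocate_load(4200.0, xfmrs, fixed_loads_kw={"BIG-CUSTOMER": 1200.0})
    for name, kw in result.items():
        print(f"{name}: {kw:.0f} kW")
```

The limitation the paper points to is visible here: the allocation is only as good as the nameplate data and the single head-end reading, which is precisely why additional modeling and sensors are needed.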
This may be because the data does not exist, the feeder usage paradigm is changing, the sampling period does not represent true usage of the network, or other, non-electrical characteristics intervene. It presents a bit of a chicken-or-egg problem: a utility needs to condition its feeders to change the operational paradigm, but it also needs operational information to decide where and how to change the network. The solution is to combine existing usage and network data with other forms of modeling and approximation to build the best possible model of the future network.

This exercise therefore refines traditional modeling with three additional techniques: geospatial analysis; statistical modeling; and sensor selection and placement for accuracy. If a distribution management system (DMS) will be deployed, or is being considered, its modeling capability may be used as an additional basis for refinement, employing simulated and derived data from the above techniques. Lastly, if high accuracy is required and time allows, a limited number of feeder segments can be deployed and monitored to validate the modeling theories prior to full deployment.

The overall goals for using this type of technique are to:

- Limit customer over- and under-voltage;
- Maximize the megawatts returned to the system in load reduction modes;
- Optimize the effectiveness of the DMS and its models;
- Minimize the cost of additional sensors, confining them to areas that will return the most value;
- Develop automated operational scenarios, with test and validation prior to system-wide implementation; and
- Provide a foundation for additional network automation capabilities.

The first step is to set aside a short period of time to thoroughly vet possible influences on the number, spacing and value offered by additional sensors on the distribution grid. This involves understanding and obtaining the information that will most influence the model, and therefore the use of sensors. That information could include historical load data, distribution network characteristics, transformer nameplate loading, customer survey data, weather data and other related information.

The second step is the application of geospatial analysis to identify areas of the grid most likely to have influences driving a need for additional sensors. Within this step, it is important to correlate those influential geospatial parameters with the load profiles of various residential and commercial customer types. This improves on simply applying the same statistical analysis generically over the entire grid, allowing for two or more "grades" of feeder segment characteristics for which different sensor standards can be developed.

The third step is statistical analysis and stochastic modeling to develop recommended standards and a methodology for determining sensor placement, based on the characteristic segments developed in the geospatial assessment (a simplified sketch of this step follows below). Items set aside as not material for sensor placement serve as a necessary input to the subsequent "predictive model" exercise.

Lastly, a traditional electrical and accuracy-based analysis is used to model the exact number and placement of additional sensors to support the derived models and the planned usage of the system for all scenarios depicted in the model – not just summertime peaking.
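As a rough illustration of the third step, the sketch below uses Monte Carlo simulation and a greedy heuristic to decide which feeder segments most deserve a sensor: the segments whose loads are most uncertain reduce estimation error the most when instrumented. The segment statistics, the error metric and the greedy strategy are all assumptions made for this sketch, not the methodology the paper prescribes.

```python
# Hypothetical stochastic sensor-placement sketch: simulate uncertain segment
# loads, then greedily add sensors where they remove the most estimation error.
import random
import statistics

# (mean kW, std-dev kW) per feeder segment; a higher std-dev means the
# allocation-based estimate is less trustworthy on that segment.
SEGMENTS = {
    "head": (3800, 150),  # well characterized near the substation
    "mid":  (2400, 600),  # new distributed generation makes this volatile
    "tail": (1100, 450),  # EV charging growth, little usage history
}

def estimation_error(sensored: set[str], trials: int = 2000) -> float:
    """Mean absolute error of the total-load estimate: sensored segments are
    read directly; the rest are assumed to sit at their historical mean."""
    errors = []
    for _ in range(trials):
        actual = {s: random.gauss(mu, sd) for s, (mu, sd) in SEGMENTS.items()}
        estimate = sum(actual[s] if s in sensored else SEGMENTS[s][0]
                       for s in SEGMENTS)
        errors.append(abs(sum(actual.values()) - estimate))
    return statistics.mean(errors)

def greedy_placement(budget: int) -> list[str]:
    """Add sensors one at a time, each time picking the segment whose
    instrumentation most reduces the simulated estimation error."""
    placed: set[str] = set()
    for _ in range(budget):
        best = min((s for s in SEGMENTS if s not in placed),
                   key=lambda s: estimation_error(placed | {s}))
        placed.add(best)
        print(f"place sensor on '{best}': error -> "
              f"{estimation_error(placed):.0f} kW")
    return sorted(placed)

if __name__ == "__main__":
    random.seed(42)
    greedy_placement(budget=2)
```

With these illustrative numbers, the volatile mid-feeder segment is instrumented first, mirroring the paper's point that sensor spend should follow modeling uncertainty rather than being spread uniformly across the grid.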
Conclusion

The modern distribution network built for the smart grid will need significantly more detailed planning and modeling than a traditional network. No single tool is suited to the task; it will take multiple disciplines and techniques to derive the most benefit from the modeling exercise. A utility that embraces the techniques described in this paper, however, will not only better understand how its networks perform in various smart grid scenarios, but will also be better positioned to fully optimize those networks for both load reduction and loss reduction.